661 |
Document distribution algorithms for distributed web servers / 伍頌斌, Ng, Chung-pun. January 2002 (has links)
Published or final version / Computer Science and Information Systems / Master of Philosophy
|
662 |
Improving the performance of distributed multi-agent based simulation / Mengistu, Dawit. January 2011 (has links)
This research investigates approaches to improve the performance of multi-agent based simulation (MABS) applications executed in distributed computing environments. MABS is a type of micro-level simulation used to study dynamic systems consisting of interacting entities, and in some cases the number of simulated entities can be very large. Most of the existing publicly available MABS tools are single-threaded desktop applications that are not suited for distributed execution. For this reason, general-purpose multi-agent platforms with multi-threading support are sometimes used for deploying MABS on distributed resources. However, these platforms do not scale well for large simulations due to huge communication overheads. In this research, different strategies to deploy large-scale MABS in distributed environments are explored, e.g., tuning existing multi-agent platforms, porting single-threaded MABS tools to a distributed environment, and implementing a service oriented architecture (SOA) deployment model. Although the factors affecting the performance of distributed applications are well known, the relative significance of the factors depends on the architecture of the application and the behaviour of the execution environment. We developed mathematical performance models to understand the influence of these factors and to analyze the execution characteristics of MABS. These performance models are then used to formulate algorithms for resource management and application tuning decisions. The most important performance improvement solutions achieved in this thesis include: predictive estimation of optimal resource requirements, heuristics for agent reallocation to reduce communication overhead, and an optimistic synchronization algorithm to minimize time management overhead. Additional application tuning techniques such as agent directory caching and message aggregation for fine-grained simulations are also proposed.
These solutions were experimentally validated in different types of distributed computing environments. Another contribution of this research is that all proposed improvement measures are implemented at the application level; they do not require changing the configuration of the computing and communication resources on which the application runs. Such application-level optimizations are useful for application developers and users who have limited access to remote resources and lack the authorization to carry out resource-level optimizations.
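The message-aggregation tuning mentioned above can be illustrated with a small sketch (class name, API, and batch size are illustrative assumptions, not taken from the thesis): outgoing agent messages are buffered per destination node and flushed in batches, so many fine-grained messages cost one network send each batch.

```python
from collections import defaultdict

class AggregatingChannel:
    """Buffers outgoing agent messages per destination node and flushes
    them in batches, trading a small delivery delay for fewer (larger)
    network sends -- the message-aggregation idea for fine-grained MABS."""

    def __init__(self, send_fn, batch_size=10):
        self.send_fn = send_fn          # actual network send: (dest, [msgs])
        self.batch_size = batch_size
        self.buffers = defaultdict(list)

    def post(self, dest, msg):
        buf = self.buffers[dest]
        buf.append(msg)
        if len(buf) >= self.batch_size:
            self.flush(dest)

    def flush(self, dest):
        if self.buffers[dest]:
            self.send_fn(dest, self.buffers[dest])
            self.buffers[dest] = []

    def flush_all(self):                # e.g. called at each simulation step
        for dest in list(self.buffers):
            self.flush(dest)
```

With a batch size of 3, posting seven messages to one node triggers two network sends, and a final `flush_all()` delivers the remainder.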
|
663 |
Applications of impedance-based fault locating methods in power systems / Min, Kyung Woo. 18 September 2014 (has links)
The concentration of this work is in estimating fault locations in power systems. After describing the basic concepts of fault locating methods, this work describes improving the fault location estimates, applying the fault locating methods, and implementing the methods in software. Each method described in the chapters is evaluated with either actual field data or simulated data based on field parameters.
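One classic impedance-based technique, shown here as a hedged sketch rather than any method from this work, is the one-terminal reactance method: the imaginary part of the apparent impedance V/I seen from the measuring terminal grows linearly with the distance to the fault. All numeric values below are synthetic and purely illustrative.

```python
def reactance_fault_distance(v, i, z_line_per_km):
    """One-terminal reactance method: for a purely resistive fault fed from
    one end, Im(V/I) = d * Im(z_line), so the distance d falls out directly
    and is insensitive to the (real) fault resistance."""
    z_apparent = v / i
    return z_apparent.imag / z_line_per_km.imag   # distance in km

# Synthetic single-phase example (illustrative values):
z_line = 0.05 + 0.4j        # line impedance per km, ohm/km
d_true = 37.0               # km to fault
r_fault = 5.0               # ohm, purely resistive fault
i_meas = 0.8 - 0.3j         # measured current phasor, kA
v_meas = (d_true * z_line + r_fault) * i_meas   # measured voltage phasor
```

Running `reactance_fault_distance(v_meas, i_meas, z_line)` recovers the 37 km fault distance; in practice, remote infeed and non-homogeneous lines introduce errors that more elaborate methods correct.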
|
664 |
SQ-CSMA : universally lowering the delay of queue-based CSMA/CA / Ganesh, Rajaganesh. 14 October 2014 (has links)
Recent works show that, by incorporating queue length information, CSMA/CA multiple access protocols can achieve maximum throughput in general ad-hoc wireless networks. In all of these protocols, the aggressiveness with which a link attempts to grab the channel is governed solely by its own queue, and is independent of the queues of other interfering links. While this independence allows for minimal control signaling, it results in schedules that change very slowly. This causes starvation and delays, especially at moderate to high loads. In this work we add a very small amount of signaling: an occasional few bits between interfering links. These bits allow us a new functionality, switching: a link can now turn off its interfering links with a certain probability. The challenge is ensuring maximum throughput and lower delay via the use of this new functionality. We develop a new protocol, Switch-enabled Queue-based CSMA (SQ-CSMA), that uses switching to achieve both of these objectives. This simple additional functionality, and our protocol to leverage it, can be "added on" to every existing CSMA/CA protocol that uses queue lengths. Interestingly, we see that in every case it has a significant positive impact on delay, universally improving the performance of existing protocols.
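The idea can be sketched with a toy discrete-time model. This is emphatically not the SQ-CSMA protocol itself: the attempt-probability function, the switching rule, and all parameters below are illustrative assumptions. Two interfering links each attempt the channel with a probability growing in their own queue length; with a small probability, the few signaled bits let the longer queue switch off its interferer for the slot.

```python
import random

def attempt_prob(q):
    """Aggressiveness grows with the link's own queue length (toy choice)."""
    return q / (q + 1.0)

def simulate(arrival=(0.3, 0.3), p_switch=0.1, slots=10000, seed=1):
    random.seed(seed)
    q = [0, 0]                    # queue lengths of two interfering links
    served = [0, 0]
    for _ in range(slots):
        for l in (0, 1):          # Bernoulli packet arrivals
            if random.random() < arrival[l]:
                q[l] += 1
        want = [random.random() < attempt_prob(q[l]) for l in (0, 1)]
        if want[0] and want[1]:   # interference: at most one can transmit
            winner = 0 if random.random() < 0.5 else 1
            if random.random() < p_switch:
                # switching: the more backlogged link turns off the other
                winner = 0 if q[0] >= q[1] else 1
        elif want[0] or want[1]:
            winner = 0 if want[0] else 1
        else:
            continue              # idle slot
        if q[winner] > 0:
            q[winner] -= 1
            served[winner] += 1
    return q, served
```

Comparing the final queue lengths with and without switching (`p_switch=0.0`) gives a rough feel for the delay effect the abstract describes.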
|
665 |
Multi-Agent Planning and Coordination Under Resource Constraints / Pecora, Federico. January 2007 (has links)
The research described in this thesis stems from ROBOCARE, a three-year research project aimed at developing software and robotic technology for providing intelligent support for elderly people. This thesis deals with two problems which have emerged in the course of the project's development.

Multi-agent coordination with scarce resources. Multi-agent planning is concerned with automatically devising plans or strategies for the coordinated enactment of concurrently executing agents. A common realistic constraint in applications which require the coordination of multiple agents is the scarcity of resources for execution. In these cases, concurrency is affected by limited-capacity resources, the presence of which modifies the structure of the planning/coordination problem. Specifically, the first part of this thesis tackles this problem in two contexts, namely when planning is carried out centrally (planning from first principles), and in the context of distributed multi-agent coordination.

Domain modeling for scheduling applications. It is often the case that the products of research in AI problem solving are employed to develop applications for supporting human decision processes. Our experience in ROBOCARE as well as in other domains has often called for the customization of prototypical software for real applications. Yet the gap between what is often a research prototype and a complete decision support system is seldom easy to bridge. The second part of the thesis focuses on this issue from the point of view of scheduling software deployment.

Overall, this thesis presents three contributions within the two problems mentioned above. First, we address the issue of planning in concurrent domains in which the complexity of coordination is dominated by resource constraints. To this end, an integrated planning and scheduling architecture is presented and employed to explore the structural features of multi-agent coordination problems as a function of their resource-related characteristics. Theoretical and experimental analyses are carried out, revealing which planning strategies are best suited for achieving plans that prescribe efficient coordination subject to scarce resources.

Second, we turn our attention to distributed multi-agent coordination techniques (specifically, a distributed constraint optimization (DCOP) reduction of the coordination problem). Again, we consider the issue of achieving coordinated action in the presence of limited resources. Specifically, resource constraints impose n-ary relations among tasks. In addition, since the number of n-ary relations due to resource contention is exponential in the size of the problem, they cannot be extensionally represented in the DCOP representation of the coordination problem. Thus, we propose an algorithm for DCOP which retains the capability to dynamically post n-ary constraints during problem resolution in order to guarantee resource-feasible solutions. Although the approach is motivated by the multi-agent coordination problem, the algorithm is employed to realize a general architecture for n-ary constraint reasoning and posting.

Third, we focus on a somewhat separate issue stemming from ROBOCARE, namely a software engineering methodology for facilitating the process of customizing scheduling components in real-world applications. This work is motivated by the strong applicative requirements of ROBOCARE. We propose a software engineering methodology specific to scheduling technology development. Our experience in ROBOCARE as well as in other application scenarios has fostered the development of a modeling framework which subsumes the process of component customization for scheduling applications. The framework aims to minimize the effort involved in deploying automated reasoning technology in practice, and is grounded on the use of a modeling language for defining how domain-level concepts are grounded into elements of a technology-specific scheduling ontology.
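The motivation for dynamically posting n-ary resource constraints can be sketched in miniature (a minimal centralized backtracking illustration under assumed data, not the thesis's DCOP algorithm): capacity constraints over task subsets are never enumerated up front; instead, a violated combination is detected on the partial assignment and recorded as a nogood that prunes the rest of the search.

```python
def solve(tasks, capacity, slots):
    """Backtracking assignment of tasks (name -> resource demand) to time
    slots.  The n-ary capacity constraints are exponential in number, so
    they are not listed in advance: each violation found on a partial
    assignment is posted as a nogood and reused for pruning."""
    nogoods = set()     # frozensets of (task, slot) pairs known infeasible

    def violates(assign):
        fs = frozenset(assign.items())
        if any(ng <= fs for ng in nogoods):   # hit a previously posted nogood
            return True
        for s in range(slots):
            load = sum(tasks[t] for t, sl in assign.items() if sl == s)
            if load > capacity:
                # dynamically post the n-ary nogood for this overloaded slot
                nogoods.add(frozenset((t, sl) for t, sl in assign.items()
                                      if sl == s))
                return True
        return False

    names = list(tasks)

    def bt(i, assign):
        if i == len(names):
            return dict(assign)
        for s in range(slots):
            assign[names[i]] = s
            if not violates(assign):
                res = bt(i + 1, assign)
                if res:
                    return res
            del assign[names[i]]
        return None

    return bt(0, {})
```

For instance, three tasks of demand 2, 2 and 1 on a resource of capacity 3 can be spread over two slots; the overloaded combination {a in slot 0, b in slot 0} is posted once and never revisited.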
|
666 |
Development and Validation of Distributed Reactive Control Systems / Développement et Validation de Systèmes de Contrôle Reactifs Distribués / Meuter, Cédric. 14 March 2008 (has links)
A reactive control system is a computer system reacting to certain stimuli emitted by its environment in order to maintain it in a desired state. Distributed reactive control systems are generally composed of several processes, running in parallel on one or more computers, communicating with one another to perform the required control task. By their very nature, distributed reactive control systems are hard to design. Their distributed nature and/or the communication scheme used can introduce subtle unforeseen behaviours. When dealing with critical applications, such as plane control systems or traffic light control systems, those unintended behaviours can have disastrous consequences. It is therefore essential for the designer to ensure that this does not happen. For that purpose, rigorous and systematic techniques can (and should) be applied as early as possible in the development process. In that spirit, this work aims at providing the designer with the necessary tools to facilitate the development and validation of such distributed reactive control systems. In particular, we show how a dedicated language called dSL (Distributed Supervision Language) can be used to ease the development process. We also study how validation techniques such as model-checking and testing can be applied in this context.
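A reactive control system in the sense described above can be sketched minimally as a process that reacts to every sensed stimulus with a command driving the environment back toward the desired state. This hypothetical thermostat-style example is only an illustration of the concept and bears no relation to dSL's actual syntax or semantics.

```python
def reactive_controller(desired, tolerance=0.5):
    """A minimal reactive control process: on every stimulus (a sensed
    value) it reacts with a command intended to keep the environment in
    the desired state.  Written as a generator so the environment and the
    controller interact as communicating processes."""
    command = "idle"
    while True:
        sensed = yield command        # wait for the next environment stimulus
        if sensed < desired - tolerance:
            command = "heat"
        elif sensed > desired + tolerance:
            command = "cool"
        else:
            command = "idle"
```

In a distributed setting, several such processes would run on different machines and exchange messages, which is exactly where the subtle interleavings that model-checking and testing target arise.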
|
667 |
Enabling and Achieving Self-Management for Large Scale Distributed Systems : Platform and Design Methodology for Self-Management / Al-Shishtawy, Ahmad. January 2010 (has links)
<p>Autonomic computing is a paradigm that aims at reducing administrative overhead by using autonomic managers to make applications self-managing. To better deal with large-scale dynamic environments, and to improve scalability, robustness, and performance, we advocate the distribution of management functions among several cooperative autonomic managers that coordinate their activities in order to achieve management objectives. Programming autonomic management in turn requires programming environment support and higher level abstractions to become feasible.</p><p>In this thesis we present an introductory part and a number of papers that summarize our work in the area of autonomic computing. We focus on enabling and achieving self-management for large scale and/or dynamic distributed applications. We start by presenting our platform, called Niche, for programming self-managing component-based distributed applications. Niche supports a network-transparent view of system architecture, simplifying the design of an application's self-* code. Niche provides a concise and expressive API for self-* code. The implementation of the framework relies on the scalability and robustness of structured overlay networks. We have also developed a distributed file storage service, called YASS, to illustrate and evaluate Niche.</p><p>After introducing Niche we proceed by presenting a methodology and design space for designing the management part of a distributed self-managing application in a distributed manner. We define design steps, which include the partitioning of management functions and the orchestration of multiple autonomic managers. We illustrate the proposed design methodology by applying it to the design and development of an improved version of our distributed storage service YASS as a case study.</p><p>We continue by presenting a generic policy-based management framework which has been integrated into Niche. Policies are sets of rules that govern the system behaviour and reflect the business goals or system management objectives. Policy-based management is introduced to simplify the management and reduce the overhead, by setting up policies to govern system behaviour. A prototype of the framework is presented and two generic policy languages (policy engines and corresponding APIs), namely SPL and XACML, are evaluated using our self-managing file storage application YASS as a case study.</p><p>Finally, we present a generic approach to achieving robust services that is based on finite state machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of resource hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite churn.</p>
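The notion of policies as condition-action rules governing system behaviour can be sketched generically. This is a deliberately minimal illustration; real policy engines such as SPL or XACML are far richer, and the management objective shown (replica count for a storage service) is only an assumed example in the spirit of YASS.

```python
class PolicyEngine:
    """Policies as condition -> action rules evaluated against the current
    system state; the first matching rule fires (a simple conflict-
    resolution strategy chosen for this sketch)."""

    def __init__(self):
        self.rules = []

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def evaluate(self, state):
        for condition, action in self.rules:
            if condition(state):
                return action(state)
        return None                     # no management action needed

# Hypothetical management objectives for a storage service:
engine = PolicyEngine()
engine.add_rule(lambda s: s["replicas"] < s["min_replicas"],
                lambda s: "add_replica")
engine.add_rule(lambda s: s["load"] > 0.9,
                lambda s: "add_storage_node")
```

Changing management objectives then means editing the rule set, not the application or the autonomic managers themselves, which is the overhead reduction the framework aims at.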
|
668 |
Utveckling och tillämpning av en GIS-baserad hydrologisk modell / Development and application of a GIS based hydrological model / Westerberg, Ida. January 2005 (has links)
<p>A distributed hydrological rainfall-runoff model has been developed using a GIS integrated with a dynamic programming module (PCRaster). The model has been developed within the framework of the EU-project TWINBAS at IVL Swedish Environmental Research Institute, and is intended for use in WATSHMAN – a tool for watershed management developed at IVL. The model simulates runoff from a catchment based on daily mean values of temperature and precipitation. The GIS input data consist of maps with soil type, land-use, lakes, rivers and a digital elevation model. The model is a hybrid between a conceptual and a physical model. The snow routine uses the degree-day method, the evapotranspiration routine uses the Blaney-Criddle equation, the infiltration routine is based on Green-Ampt, groundwater is modelled assuming a linear reservoir and the flow routing is done with the kinematic wave equation combined with Manning’s equation.</p><p>The GIS and the hydrologic model are embedded in one another, allowing calculation of each parameter in each grid cell. The output from the model consists of raster maps for each time step for a pre-defined parameter, or a time series for a parameter at a specified grid cell. The flow network is generated from the digital elevation model and determines the water flow on the grid scale. The smallest possible grid size is thus obtained from the resolution of the digital elevation model. In this implementation the grid size was 50 m x 50 m. The raster structure of the model allows for easy use of data from climate models or remotely sensed data.</p><p>The model was evaluated using the River Kölstaån catchment, a part (110 km2) of the Lake Mälaren catchment, which has its outflow in central Stockholm, Sweden. The integration of the GIS and the hydrologic model worked well, giving significant advantages with respect to taking lakes and land-use into account. The evaluation data consisted of observed run-off for the period 1981 to 1991.
The result from the calibration period shows a great variation in Reff (Nash & Sutcliffe) between the years, the three best years having Reff-values of 0.70 – 0.80. The Reff-value for the entire calibration period was 0.55 and 0.48 for the validation period, where again there was great variation between different years. The volume error was 0.1 % for the calibration period and -21 % for the validation period. The evapotranspiration was overestimated during the validation period, which is probably a result of excess rain during the calibration period. The results are promising and the model has many advantages – especially the integrated GIS-system – compared to the present WATSHMAN model. It could be further developed by introducing a second groundwater storage and refining the evapotranspiration and infiltration routines. Given the promising results, the model should be evaluated in other larger and hillier areas and preferably against more distributed data.</p> / <p>A fully distributed GIS-based hydrological model for modelling catchments at the local/regional scale has been built in PCRaster. The work was carried out at IVL Swedish Environmental Research Institute within the framework of the EU project TWINBAS, which aims to identify knowledge gaps ahead of the implementation of the EU Water Framework Directive. The model is intended for use in WATSHMAN (Watershed Management System), IVL's tool for water planning in catchments, which among other things includes source apportionment calculations and analyses of abatement measures. The model is a hybrid between a physical and a conceptual hydrological model and predicts runoff at the pixel level in catchments. The simulation is driven by daily mean values of temperature and precipitation, and the model takes land use, soil type, topography and lakes into account. The model equations used are the degree-day method for snow, Blaney-Criddle for evapotranspiration, Green-Ampt for infiltration, a linear reservoir for groundwater and Manning's equation for flow routing.</p><p>The geographical information system and the hydrological model are fully integrated, so that all parameter values are calculated for each individual pixel. As output the model produces a raster map for each time step for a pre-defined parameter, or time series of parameter values at defined points. The water is transported in a flow network generated from the elevation model, and the flow path of the water is thereby determined at the pixel level. The smallest possible pixel size is thus determined by the resolution of the elevation model, and was 50 m by 50 m in this application. The raster structure of the model makes it easy to use data from climate models or remote sensing.</p><p>The catchment of the River Kölstaån, a tributary of the Köpingsån in the Lake Mälaren region, was used to evaluate the model. The integration of the GIS and the hydrological model worked very well and gave great advantages, for example with regard to taking lakes and land use into account. The model was calibrated with data from the years 1981 to 1986; the volume error obtained was 0.1 % and the Reff-value (Nash & Sutcliffe) 0.55. Large variations were, however, obtained between the years; for the three best years the Reff-value was between 0.70 and 0.80. A very heavy precipitation event, together with regulation of the main channel of the river, probably lies behind the less well described years. The model also performed well during the validation period (1987 to 1991), except that the evapotranspiration was overestimated in spring (probably due to the heavy rain during calibration); the Reff-value and volume error were 0.48 and -21 % respectively, again with large variations between the years. The results are promising and the model has many advantages compared to the present WATSHMAN model. It could be further improved by dividing the groundwater into two reservoirs and refining the evapotranspiration and infiltration routines. The elevation-model-based model should also be evaluated in other, hillier areas and against more distributed data.</p>
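The two evaluation measures quoted above, the Nash & Sutcliffe efficiency Reff and the runoff volume error, have standard formulations and can be computed as follows (the sample series in the test is illustrative, not data from the study):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency R_eff: 1 means a perfect fit, 0 means
    the model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

def volume_error_percent(observed, simulated):
    """Relative error in total runoff volume over the period, in percent."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)
```

A simulation equal to the observations gives Reff = 1.0, while one stuck at the observed mean gives Reff = 0.0, which is why yearly Reff values of 0.70–0.80 indicate a clearly informative model.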
|
669 |
Calculabilité et conditions de progression des objets partagés en présence de défaillances / Computability and progress conditions of shared objects in the presence of failures / Imbs, Damien. 12 April 2012 (has links) (PDF)
In a distributed system, different processes communicate and synchronize to carry out a global computation. The difficulty comes from the fact that a process does not know the inputs of the others. We consider here an asynchronous system: no assumption is made on the relative execution speeds of the different processes. Moreover, to model failures, we consider that processes can crash: they can stop their execution at any point of their program. In the theoretical study of distributed systems, problems must be considered from two aspects: safety and progress. Safety defines when an output value is correct. Progress defines under which conditions a process must complete an operation, independently of the value it chooses as output. This thesis focuses on the links between the computability and the progress conditions of distributed objects. First, we introduce and study the notion of asymmetric progress conditions: progress conditions that can differ between the processes of the system. We then study the possibility of providing abstractions in a given system. The question of the equivalence of system models is then addressed, in particular in the case where processes have access to powerful objects. Finally, the thesis treats the subject of colored tasks by providing a renaming algorithm adapted to the case where concurrency is reduced. A new class of colored tasks is finally introduced which encompasses, under a single formalism, several problems considered until now as independent.
|
670 |
The effects of detailed analysis on the prediction of seismic building pounding performance / Cole, Gregory Lloyd. January 2012 (has links)
Building pounding is a recognised phenomenon where adjacent buildings collide under lateral loading due to insufficient provision of building separation. The consequences of this interaction are known to be complex, and both buildings’ responses can be significantly affected. In the absence of extensive experimental data, numerical modelling has been frequently adopted as a means of evaluating building pounding risk during earthquakes. In performing numerical analysis, it becomes necessary to create specialised ‘contact’ elements to simulate building contact. While many contact elements have been previously proposed, detailed consideration of their inherent assumptions has frequently been overlooked. This thesis considers the significance and consequences of using the Kelvin contact element for a variety of pounding situations and with varying levels of model detail.
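The Kelvin contact element examined in this thesis is commonly formulated as a linear spring and a viscous dashpot acting in parallel, producing force only while the separation gap is closed. A minimal sketch of this standard formulation (variable names and the numeric values in the test are illustrative):

```python
def kelvin_contact_force(gap, u1, u2, v1, v2, k, c):
    """Kelvin (spring + dashpot in parallel) contact element as commonly
    used in pounding analysis: once the relative displacement closes the
    building separation gap, the contact force is a linear spring term plus
    a viscous damping term; otherwise the buildings are not in contact."""
    penetration = (u1 - u2) - gap           # overlap of the two buildings
    if penetration <= 0.0:
        return 0.0                          # separation maintained: no force
    return k * penetration + c * (v1 - v2)  # spring + dashpot force
```

One well-known inherent assumption of this formulation is that the damping term can yield a small non-physical tensile pull just before separation, an example of the kind of overlooked modelling consequence the thesis scrutinizes.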
Pounding between two adjacent floors (floor/floor collision) is considered as a one dimensional wave propagation problem. By modelling each floor as a flexible rod (termed distributed mass modelling), theoretical relationships for collision force, collision duration and post-collision velocity are derived. This theory is then compared to the predictions made when using the traditionally adopted assumptions of fully rigid colliding floors (termed lumped mass modelling). The post-collision velocities obtained from each method are found to agree only when the axial period of both floors is identical. Relationships between lumped mass and distributed mass models are formed, and an ‘equivalent lumped mass’ method is developed where distributed mass effects can be emulated without explicit modelling of floor flexibility.
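For the lumped mass (rigid floor) end of this comparison, post-collision velocities follow from the classical restitution model, a standard result shown here only as an orienting sketch, not as the thesis's derivation: momentum is conserved and the coefficient of restitution e scales the relative separation velocity.

```python
def post_collision_velocities(m1, v1, m2, v2, e=1.0):
    """Rigid-body collision of two lumped floor masses: returns the
    post-collision velocities.  e = 1 is the fully elastic case; e = 0
    leaves both masses moving with the common momentum-conserving
    velocity."""
    dv = (1.0 + e) * (v1 - v2) / (m1 + m2)
    return v1 - m2 * dv, v2 + m1 * dv
```

For equal masses and e = 1 the floors simply exchange velocities, which is the baseline against which the distributed mass (flexible rod) theory predicts different outcomes whenever the floors' axial periods differ.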
The theoretical solution method is then adapted for use in Non-Linear Time History Analysis (NLTHA) software to model specific pounding situations. Numerical modelling of a single collision is performed to compare these results to the theoretical predictions. Good agreement is found, and the model’s complexity is simplified until a sufficiently accurate simulation is performed without overly onerous computational requirements. Five methods are detailed that incorporate energy loss during collision into the distributed mass models and a calibration method is developed that enables researchers to define the level of energy loss that occurs during a single collision.
Using the developed modelling methods, the pounding response of two existing Wellington buildings is predicted. This is first performed using 2D analysis of the stiffest frame from each building. The predicted building pounding damage is categorised into local damage (damage resulting from the magnitude of the force applied during contact) and global damage (damage due to the change in dynamic building properties resulting from momentum transfer during collision). Local and global damage effects are found to be fundamentally different consequences of collision, with the two categories responding differently to changes in the modelled system. The effects of building separation, scaling of input motion, modelling of soil-structure-interaction, collision damping, and floor rigidity are investigated for the considered system.
3D analysis of the building configuration is then investigated. Additional complications arising from the transverse movement of buildings prior to and during collision are identified and refined modelling methods are developed. The 3D configuration of these buildings causes torsional interaction, despite both buildings being perfectly symmetrical. This torsion is due to the eccentric positioning of the buildings relative to each other, which causes an eccentric contact load when pounding occurs. The 3D models are used to test the effects of building separation, 2D vs. 3D modelling, collision damping, floor rigidity, and the significance of the torsional interactions.
Attention is then focused on collisions between a building’s floors and an adjacent building’s columns (floor/column collision). Due to the high frequency content of pounding impacts, the significance of using Timoshenko beam theory instead of Euler-Bernoulli theory is assessed. The shear stiffness in the Timoshenko formulation is found to significantly affect the columns’ predicted performance, and is used in subsequent modelling. An appropriately accurate method of modelling that minimises computational effort is then developed. The simplified model is used to predict the performance of two three-storey buildings that experience floor/column collision. The effects of floor/column impact are predicted for collisions at mid-height, and near the support of the impacted column. Each of these scenarios investigates the effect of building separation on local damage and global damage.
Finally, a method to model collision between two adjacent walls that collide out-of-plane is developed (wall/wall contact). The adopted contact element properties are selected using analogous situations that have been previously investigated. The method is used to investigate a single collision between two different wall configurations. In the conclusions, the developed modelling methods from all the considered collision configurations are collected and presented in a summary table. It is intended that these recommendations will assist other researchers in selecting appropriate building pounding modelling properties.
|