11

Partitioned Persistent Homology

Malott, Nicholas O. January 2020 (has links)
No description available.
12

Parallel Computing of Particle Filtering Algorithms for Target Tracking Applications

Wu, Jiande 18 December 2014 (has links)
Particle filtering has been a very popular method for solving nonlinear/non-Gaussian state estimation problems for more than twenty years. Particle filters (PFs) have found many applications in areas that include nonlinear filtering of noisy signals and data, especially target tracking. However, real-time implementation of high-dimensional PFs for large-scale problems is a very challenging computational task. Parallel and distributed (P&D) computing is a promising way to deal with the computational challenges of PF methods. The main goal of this dissertation is to develop, implement, and evaluate computationally efficient PF algorithms for target tracking, and thereby bring them closer to practical applications. To reach this goal, a number of parallel PF algorithms are designed and implemented on different parallel hardware architectures, such as computer clusters, Graphics Processing Units (GPUs), and Field-Programmable Gate Arrays (FPGAs). An improved PF implementation for computer clusters is proposed - the Particle Transfer Algorithm (PTA) - which takes advantage of the cluster architecture and significantly outperforms existing algorithms. A novel GPU PF implementation is also designed that is highly efficient for GPU architectures. The proposed implementations on the different parallel computing environments are applied and tested on target tracking problems such as space object tracking, ground multitarget tracking using image sensors, and UAV multisensor tracking. A comprehensive evaluation and comparison of the algorithms' tracking and computational performance is carried out. The simulation results demonstrate that the proposed implementations greatly help overcome the computational issues of particle filtering in realistic practical problems.
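
For context, a minimal bootstrap particle filter of the kind these parallel implementations accelerate is sketched below in Python. The state-transition and observation models are the standard univariate benchmark often used in the PF literature, chosen as an illustrative assumption rather than taken from the dissertation's tracking applications; the multinomial resampling step at the end of each iteration is the communication-heavy part that cluster, GPU, and FPGA implementations must distribute.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=1000, seed=0):
    """Minimal bootstrap PF for a standard 1-D nonlinear benchmark model
    (an illustrative stand-in, not the dissertation's tracking models)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for t, y in enumerate(observations):
        # Propagate through the assumed state transition (with process noise).
        particles = (0.5 * particles + 25 * particles / (1 + particles**2)
                     + 8 * np.cos(1.2 * t)
                     + rng.normal(0, np.sqrt(10), n_particles))
        # Weight by the assumed observation likelihood y = x^2/20 + noise.
        weights = np.exp(-0.5 * (y - particles**2 / 20) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling -- the step parallel PFs must distribute.
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
    return np.array(estimates)

obs = np.random.default_rng(1).normal(0.0, 1.0, 50)  # stand-in measurements
print(bootstrap_particle_filter(obs)[:5])
```
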
13

High Performance Simulation of DEVS Based Large Scale Cellular Space Models

Sun, Yi 16 July 2009 (has links)
Cellular space modeling is becoming an increasingly important paradigm for modeling complex systems with spatial-temporal behaviors. The growing demand for cellular space models has led researchers to use different modeling formalisms, among which the Discrete Event System Specification (DEVS) is widely used due to its formal modeling and simulation framework. The increasing complexity of the systems to be modeled calls for cellular space models with large numbers of cells to capture the systems' spatial-temporal behavior, so improving simulation performance becomes crucial for simulating large-scale cellular space models. In this dissertation, we propose a framework for improving the simulation performance of large-scale DEVS-based cellular space models. The framework has a layered structure, with modeling, simulation, and network layers corresponding to the DEVS-based modeling and simulation architecture. Based on this framework, we developed methods at each layer to overcome the performance issues of simulating large-scale cellular space models. Specifically, to increase runtime and memory efficiency when simulating large numbers of cells, we applied Dynamic Structure DEVS (DSDEVS) to cellular space modeling and carried out comprehensive performance measurements. DSDEVS improves simulation performance by restricting the simulation to the active models only, making it more efficient than loading the entire cellular space. To reduce the number of simulation cycles caused by extensive message passing among cells, we developed a pre-schedule modeling approach that exploits model behavior to improve simulation performance. At the network layer, we developed a modified time-warp algorithm that supports parallel simulation of DEVS-based cellular space models. The developed methods have been applied to large-scale wildfire spread simulations based on the DEVS-FIRE simulation environment and have achieved significant performance gains.
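
As a rough illustration of the active-model idea behind applying DSDEVS to cellular spaces (a toy sketch of the general pattern, not DEVS-FIRE's actual implementation), a simulator can keep only burning cells in its event queue and instantiate neighbors on demand, instead of allocating the full grid up front:

```python
import heapq

def simulate_active_cells(grid_size, ignition, spread_delay=1.0, horizon=50.0):
    """Toy fire-spread simulation that only processes active (burning) cells,
    mimicking how a DSDEVS model avoids loading the entire cellular space."""
    burning_since = {ignition: 0.0}            # only active cells are stored
    events = [(0.0, ignition)]                 # (time, cell) priority queue
    while events:
        t, (x, y) = heapq.heappop(events)
        if t > horizon:
            break
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_size and 0 <= ny < grid_size \
                    and (nx, ny) not in burning_since:
                # A neighbor cell becomes active only when fire reaches it.
                burning_since[(nx, ny)] = t + spread_delay
                heapq.heappush(events, (t + spread_delay, (nx, ny)))
    return burning_since

cells = simulate_active_cells(grid_size=100, ignition=(50, 50))
print(f"{len(cells)} cells ever became active out of 10000")
```
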
14

Charge Transfer in Deoxyribonucleic Acid (DNA): Static Disorder, Dynamic Fluctuations and Complex Kinetics

Edirisinghe Pathirannehelage, Neranjan S 07 January 2011 (has links)
Loosely bonded DNA bases can tolerate large structural fluctuations and thus form a dissipative environment for a charge traveling through the DNA. The nonlinear, stochastic nature of these structural fluctuations gives rise to rich charge dynamics in DNA, which we study by solving a nonlinear, stochastic, coupled system of differential equations. Charge transfer between donor and acceptor in DNA occurs via different mechanisms depending on the donor-acceptor separation: it changes from a tunneling regime to a polaron-assisted hopping regime. We also found that charge transport strongly depends on the feasibility of polaron formation, and hence has a complex dependence on temperature and charge-vibration coupling strength. Mismatched base pairs, such as different conformations of the G·A mispair, cause only minor structural changes in the host DNA molecule, making mispair recognition an arduous task. Electron transport in DNA, which depends strongly on the hopping transfer integrals between nearest base pairs, which in turn are affected by the presence of a mispair, might be an attractive approach in this regard. I report here on our investigations, via the I-V characteristics, of the effect of a mispair on the electrical properties of homogeneous and generic DNA molecules. The I-V characteristics of DNA were studied numerically within the double-stranded tight-binding model, whose parameters, such as the transfer integrals and on-site energies, are determined from first-principles calculations. The changes in the electrical current through the DNA chain due to the presence of a mispair depend on the conformation of the G·A mispair and are appreciable for DNA consisting of up to 90 base pairs. For homogeneous DNA sequences the current through the DNA is suppressed, with the strongest suppression realized for the G(anti)·A(syn) conformation of the G·A mispair. For inhomogeneous (generic) DNA molecules, the mispair can result in either suppression or enhancement of the current, depending on the type of mispair and the actual DNA sequence.
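
A minimal numerical sketch of the kind of tight-binding transport calculation underlying such I-V studies is given below, using the standard Green's-function (Landauer) approach on a single-strand, nearest-neighbor chain. The on-site energies, transfer integral, and lead couplings are placeholder values, not the dissertation's first-principles parameters, and the real model is double-stranded; a mispair would enter by perturbing one site's energy and its transfer integrals.

```python
import numpy as np

def transmission(energy, onsite, hopping=0.1, lead_coupling=0.2):
    """Landauer transmission through a 1-D tight-binding chain using the
    retarded Green's function with wide-band (constant) lead self-energies."""
    n = len(onsite)
    h = np.diag(onsite).astype(complex)
    for i in range(n - 1):                 # nearest-neighbor transfer integrals
        h[i, i + 1] = h[i + 1, i] = hopping
    sigma_l = np.zeros((n, n), complex); sigma_l[0, 0] = -1j * lead_coupling
    sigma_r = np.zeros((n, n), complex); sigma_r[-1, -1] = -1j * lead_coupling
    g = np.linalg.inv(energy * np.eye(n) - h - sigma_l - sigma_r)
    gamma_l, gamma_r = -2 * sigma_l.imag, -2 * sigma_r.imag
    # T(E) = Tr[Gamma_L G Gamma_R G^dagger]
    return np.real(np.trace(gamma_l @ g @ gamma_r @ g.conj().T))

# Placeholder on-site energies for a short homogeneous sequence.
onsite = np.full(20, 0.0)
print(transmission(energy=0.05, onsite=onsite))
```
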
15

Performance Analysis and Evaluation of Divisible Load Theory and Dynamic Loop Scheduling Algorithms in Parallel and Distributed Environments

Balasubramaniam, Mahadevan 14 August 2015 (has links)
High performance parallel and distributed computing systems are used to solve large, complex, data-parallel scientific applications that require enormous computational power. Data-parallel workloads, which perform similar operations on different data objects, are present in a large number of scientific applications, such as N-body and Monte Carlo simulations, and are expressed in the form of loops. Data-parallel workloads that lack precedence constraints are called arbitrarily divisible workloads and are amenable to easy parallelization. Load imbalance arising from application, algorithmic, and systemic characteristics during the execution of scientific applications degrades performance, so scheduling arbitrarily divisible workloads to address load imbalance and better utilize computing resources is a major area of research. Divisible load theory (DLT) and dynamic loop scheduling (DLS) algorithms are two algorithmic approaches to scheduling arbitrarily divisible workloads. Despite sharing the goal of load balancing, the two approaches are fundamentally different: DLT algorithms are linear, deterministic, and platform dependent, whereas DLS algorithms are probabilistic and platform agnostic. DLT algorithms have traditionally been used for performance prediction in environments with known or expected variation in system characteristics at runtime, while DLS algorithms are designed to simultaneously address all sources of load imbalance that stochastically arise at runtime from application, algorithmic, and systemic characteristics. In this dissertation, an analysis and performance evaluation of DLT and DLS algorithms is presented in the form of a scalability study and a robustness investigation, and the effect of network topology on their performance is studied. A hybrid scheduling approach is also proposed that integrates DLT and DLS algorithms. The hybrid approach combines the strengths of DLT and DLS algorithms, improves the performance of scientific applications running in large-scale parallel and distributed computing environments, and delivers performance superior to that obtained by applying DLT algorithms in isolation. The range of conditions for which the hybrid approach is useful is also identified and discussed.
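
To make the contrast concrete, the following sketch (with purely illustrative numbers) shows the two flavors of chunk sizing: a DLT-style static split proportional to known processor speeds, next to a factoring-style DLS schedule that assigns a decreasing fraction of the remaining iterations each round.

```python
def dlt_split(total_iters, speeds):
    """DLT-style static split: chunk sizes proportional to known, fixed
    processor speeds (rounding may leave a small remainder in practice)."""
    s = sum(speeds)
    return [round(total_iters * v / s) for v in speeds]

def factoring_chunks(total_iters, n_procs, factor=2):
    """DLS factoring-style schedule: each round hands out 1/factor of the
    remaining iterations, split evenly among processors, so later chunks
    shrink and absorb runtime variability."""
    chunks, remaining = [], total_iters
    while remaining > 0:
        chunk = max(1, remaining // (factor * n_procs))
        for _ in range(n_procs):
            if remaining <= 0:
                break
            c = min(chunk, remaining)
            chunks.append(c)
            remaining -= c
    return chunks

print(dlt_split(1000, speeds=[4, 2, 1, 1]))    # static, platform-dependent
print(factoring_chunks(1000, n_procs=4))       # dynamic, platform-agnostic
```
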
16

Electromagnetic Scattering by Gratings and Random Rough Surfaces: Implementation of High Performance Algorithms for Solving Eigenvalue Problems and Problems with Initial Conditions

Pan, Cihui 02 December 2015 (has links)
We study electromagnetic diffraction by gratings and random rough surfaces. The C-method is an exact method developed for this purpose. It is based on Maxwell's equations in covariant form, written in a nonorthogonal coordinate system; the C-method leads to an eigenvalue problem, and the diffracted field is expanded as a linear combination of the eigensolutions satisfying the outgoing-wave condition. We focus on the numerical aspects of the C-method, aiming at an efficient implementation of this exact method. For gratings, we develop a new version of the C-method that leads to a differential system with initial conditions; this new version can be used to study multilayer gratings with a homogeneous medium. We also implemented high performance algorithms for the original versions of the C-method. In particular, we developed a parallel QR algorithm designed specifically for the C-method - a variant of the QR algorithm based on three techniques: fast shift, parallel bulge chasing, and parallel aggressive early deflation (AED) - together with a spectral projection method to solve the eigenvalue problem more efficiently. Experiments show that the computation time can be reduced significantly.
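
The outgoing-wave selection step can be illustrated with a generic dense eigensolver; the matrix and the selection rule below are schematic stand-ins (assumptions for illustration), not the C-method's actual covariant-Maxwell operator and radiation condition:

```python
import numpy as np

def outgoing_modes(matrix):
    """Generic sketch: solve a dense eigenproblem and keep the eigenpairs
    satisfying an outgoing/decaying-wave criterion (Im(lambda) > 0 here).
    The C-method's actual operator and criterion are problem-specific."""
    eigvals, eigvecs = np.linalg.eig(matrix)
    keep = eigvals.imag > 0          # assumed outgoing-wave selection rule
    return eigvals[keep], eigvecs[:, keep]

# Placeholder non-symmetric complex matrix standing in for the operator.
rng = np.random.default_rng(1)
a = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
vals, vecs = outgoing_modes(a)
print(len(vals), "outgoing modes out of", a.shape[0])
```
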
17

Responsive Execution of Parallel Programs in Distributed Computing Environments

Karl, Holger 03 December 1999 (has links)
Clusters of standard workstations have been shown to be an attractive environment for parallel computing. However, some unsolved problems remain before they suit all application scenarios. One of these problems is dependable and timely program execution: in many applications a program should be completed successfully at a predictable point in time. Mechanisms that combine dependable and timely execution of parallel programs in distributed computing environments are the main objective of this dissertation. Addressing these properties requires a joint metric for dependability and timeliness. Responsiveness is such a metric; it is refined for the purposes of this work. As a case study, Calypso and Charlotte, two parallel programming systems, are analyzed, and their shortcomings with regard to responsiveness are identified on several abstraction levels. Solutions are presented and generalized, resulting in widely applicable mechanisms for (parallel) responsive services. Specifically, these solutions are: 1) a responsiveness analysis of Calypso's eager scheduling (a mechanism for load balancing and fault masking); 2) amelioration of a single point of failure, both by a responsiveness analysis of checkpointing and by a standard-interface-based system for replicating legacy software; 3) a mechanism for guaranteed resource allocation suitable for parallel programs; and 4) the use of semantic information about a program's communication pattern to improve its performance.
All proposed mechanisms can be combined and are suitable for use in standard environments. Analysis and experiments show that these mechanisms improve the responsiveness of eligible applications.
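
As a hedged illustration of the kind of trade-off a responsiveness analysis of checkpointing weighs, the sketch below uses Young's classical first-order approximation of the optimal checkpoint interval; the formula and the numbers are textbook assumptions, not this dissertation's model:

```python
import math

def young_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's first-order approximation of the optimal checkpoint interval:
    tau = sqrt(2 * C * MTBF), trading checkpoint overhead against the
    expected rework after a failure."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

def expected_overhead(tau, checkpoint_cost, mtbf):
    """Approximate fractional overhead: checkpoint cost per interval plus
    expected recomputation of half an interval per failure."""
    return checkpoint_cost / tau + tau / (2 * mtbf)

tau = young_checkpoint_interval(checkpoint_cost=30, mtbf=24 * 3600)  # seconds
print(f"checkpoint every {tau / 60:.1f} min, "
      f"overhead ~{expected_overhead(tau, 30, 24 * 3600):.2%}")
```
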
18

Exploration of Parallel Graph-Processing Algorithms on Distributed Architectures

Collet, Julien 06 December 2017 (has links)
With the advent of ever-increasing graph datasets in a large number of domains, parallel graph-processing applications deployed on distributed architectures are more and more needed to cope with the growing demand for memory and compute resources. Though large-scale distributed architectures are available, notably in the High-Performance Computing (HPC) domain, the programming and deployment complexity of such graph-processing algorithms, whose parallelization and complexity are highly data-dependent, hampers usability. Moreover, the difficulty of evaluating the performance behavior of these applications complicates the assessment of the relevance of the architecture used. With this in mind, this thesis explores graph-processing algorithms on distributed architectures, notably using GraphLab, a state-of-the-art graph-processing framework. Two real use-cases are considered, one from execution-trace analysis and one from genomic data processing; for each, a parallel implementation is proposed and deployed on several distributed architectures of varying scales. This study highlights operating ranges, which can be leveraged to place a system at a relevant operating point with respect to the datasets processed and the cluster nodes used. A further study compares commodity cluster architectures and higher-end compute servers on the two use-cases previously developed. It highlights the particular relevance, in this applicative context, of clustered commodity workstations, which are considerably cheaper and simpler with respect to node architecture yet deliver higher performance; the gap widens further when performance is weighted by purchase and operating costs. The thesis then explores how performance studies help in cluster design for graph processing. In particular, studying the throughput of a graph-processing system gives fruitful insights for further node-architecture improvements, and a more in-depth performance analysis leads to guidelines for appropriately sizing a cluster for a given workload, paving the way toward resource allocation for graph processing. Finally, hardware improvements for next generations of graph-processing servers are proposed and evaluated: a flash-based victim-swap mechanism is proposed to mitigate the significant performance drop observed when the cluster operates with saturated RAM, and the relevance of ARM-based microservers for graph processing is investigated by porting GraphLab to an NVIDIA TX2-based architecture. The performance measured on such platforms is encouraging, showing in particular that the loss in raw performance relative to existing architectures is offset by much higher energy efficiency.
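
GraphLab expresses computations as vertex programs in a gather-apply-scatter (GAS) style; the following toy PageRank mocks that pattern in Python as a schematic of the programming model, not GraphLab's actual distributed C++ API:

```python
def pagerank_gas(edges, n_vertices, damping=0.85, iters=20):
    """Toy gather-apply-scatter PageRank mimicking GraphLab's vertex-program
    style; real GraphLab partitions vertices across nodes and runs the
    phases in parallel."""
    out_deg = [0] * n_vertices
    in_nbrs = [[] for _ in range(n_vertices)]
    for src, dst in edges:
        out_deg[src] += 1
        in_nbrs[dst].append(src)
    rank = [1.0 / n_vertices] * n_vertices
    for _ in range(iters):
        new_rank = []
        for v in range(n_vertices):
            # Gather: sum contributions from in-neighbors.
            acc = sum(rank[u] / out_deg[u] for u in in_nbrs[v])
            # Apply: update the vertex value.
            new_rank.append((1 - damping) / n_vertices + damping * acc)
        rank = new_rank  # Scatter/sync barrier between supersteps.
    return rank

print(pagerank_gas([(0, 1), (1, 2), (2, 0), (2, 1)], n_vertices=3))
```
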
19

Efficient Grid Computing Based Algorithms for Power System Data Analysis

Mohsin Ali Unknown Date (has links)
The role of electric power systems has grown steadily in both scope and importance over time, making electricity increasingly recognized as a key to social and economic progress in many developing countries. In a sense, reliable power systems constitute the foundation of all prospering societies. The constant expansion of electric power systems, along with increased energy demand, makes power systems more and more complex. Such complexity results in much uncertainty, which demands comprehensive reliability and security assessment to ensure reliable energy supply. Power industries in many countries are facing these challenges and are trying to increase their computational capability to handle the ever-increasing data and analytical needs of operations and planning. Moreover, deregulated electricity markets have been in operation in a number of countries since the 1990s. During the deregulation process, vertically integrated power utilities have been reformed into competitive markets, with the initial goals of improving market efficiency, minimizing production costs, and reducing electricity prices. Alongside the benefits achieved by deregulation, several new challenges have also been observed in the market. Due to fundamental changes to the electric power industry, traditional management and analysis methods cannot deal with these new challenges. Deterministic reliability assessment criteria still exist, but they do not reflect the probabilistic nature of power systems, and the worst-case analysis of the deterministic approach results in excess operating costs. Probabilistic methods, on the other hand, are now widely accepted. Analytical methods use mathematical formulas for reliability evaluation and generate results quickly, but they require many assumptions and are not suitable for large, complex systems. Simulation-based techniques capture much of the uncertainty and simulate the random behavior of the system, but they require substantial computing power, memory, and other computing resources. Power engineers have to run thousands of time-domain simulations to determine stability for a set of credible disturbances before dispatching; security analysis, for example, is associated with the steady-state and dynamic response of the power system to various disturbances. Real-time security assessment is highly desirable, especially in the market environment. Therefore, novel analysis methods are required for power system reliability and security in the deregulated environment that can provide comprehensive results, together with high performance computing (HPC) power to carry out such analysis within a limited time. Further, with deregulation in the power industry, operational control has been distributed among many organizations. The power grid is a complex network involving a range of energy resources, including nuclear, fossil, and renewable resources, with many operational levels and layers including control centers, power plants, and transmission and distribution systems. The energy resources are managed by different organizations in the electricity market, and all participants (producers, consumers, and operators) can affect the operational state of the power grid at any time. Moreover, adequacy analysis is an important task in power system planning and can be regarded as a collaborative task, demanding collaboration among electricity market participants for reliable energy supply.
Grid computing is gaining attention from power engineering experts as an ideal solution to the computational difficulties faced by the power industry. Grid computing infrastructure involves the integrated and collaborative use of computers, networks, databases, and scientific instruments owned and managed by multiple organizations. Grid computing technology offers feasible support for the design and development of grid-based infrastructure for power system reliability and security analysis: it can help build infrastructure that provides a high performance computing and collaborative environment and offers an optimal trade-off between cost and efficiency. While power system analysis is a vast topic, only a limited amount of research has so far investigated the applications of grid computing in power systems. This thesis investigates probabilistic reliability and security analysis of complex power systems in order to develop new techniques that provide comprehensive results with high efficiency. A review of existing techniques was conducted to determine the computational needs in the area of power systems. The main objective of this research is to propose and develop a general framework of computing grid and special grid services for probabilistic power system reliability and security assessment in the electricity market. As a result of this research, grid computing based techniques are proposed for probabilistic load flow analysis, probabilistic small-signal analysis, probabilistic transient stability analysis, and probabilistic contingency analysis. Moreover, a grid computing based system is designed and developed for the monitoring and control of distributed generation systems. As part of this research, a detailed review is presented of possible applications of this technology in other aspects of power systems. It is proposed that these grid-based techniques will provide comprehensive results with great efficiency and ultimately enhance the existing computing capabilities of power companies in a cost-effective manner. As part of this research, a small-scale computing grid is developed, consisting of grid services for probabilistic reliability and security assessment techniques. A significant outcome of this research is improved performance, accuracy, and security of data sharing and collaboration. More importantly, grid-based computing will improve the capability of power system analysis in a deregulated environment, where complex and large amounts of data would otherwise be impossible to analyze without huge investments in computing facilities.
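
As a small illustration of the kind of embarrassingly parallel Monte Carlo reliability computation that such grid services can farm out to many nodes (the generator fleet, outage rates, and load below are invented placeholders):

```python
import numpy as np

def monte_carlo_lolp(capacities, outage_rates, load, n_samples=200_000, seed=0):
    """Monte Carlo loss-of-load probability: sample random generator outages
    and count how often available capacity falls short of the load. Batches
    of samples are independent, so they map naturally onto separate grid
    compute nodes."""
    rng = np.random.default_rng(seed)
    # available[i, j] is True when unit j is up in sample i.
    available = rng.random((n_samples, len(capacities))) >= outage_rates
    supplied = available.astype(float) @ capacities
    return np.mean(supplied < load)

capacities = np.array([400, 400, 300, 200, 100], dtype=float)  # MW, placeholder
outage_rates = np.array([0.05, 0.05, 0.04, 0.03, 0.02])        # forced outage rates
print(f"LOLP ~ {monte_carlo_lolp(capacities, outage_rates, load=1100):.4f}")
```
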