About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Ein diensteorientiertes Abrechnungssystem für dynamische virtuelle Organisationen / A service-oriented accounting system for dynamic virtual organizations

Göhner, Matthias, January 2009
Also published as: München, Universität der Bundeswehr, doctoral dissertation, 2009
2

A framework for evolving grid computing systems

Alfawair, Mai, January 2009
Grid computing was born in the 1990s, when researchers were looking for a way to share expensive computing resources and experiment equipment. Grid computing is becoming increasingly popular because it promotes the sharing of distributed resources that may be heterogeneous in nature, and it enables scientists and engineering professionals to solve large-scale computing problems. In reality, there are already huge numbers of grid computing facilities distributed around the world, each one having been created to serve a particular group of scientists, such as weather forecasters, or a group of users, such as stock markets. However, the need to extend the functionalities of current grid systems lends itself to the consideration of grid evolution. This allows many disjoint grids to be combined into a single powerful grid that can operate as one vast computational resource, and it allows grid environments to be flexible, able to change and evolve. The rationale for grid evolution is the current rapid and increasing advance of both software and hardware. Evolution means adding or removing capabilities; this research defines grid evolution as adding new functions and/or equipment and removing unusable resources that affect the performance of some nodes. This thesis produces a new technique for grid evolution, allowing it to be seamless and to operate at run time. Within grid computing, evolution is an integration of software and hardware and can be of two distinct types, internal and external. Internal evolution occurs inside the grid boundary, by migrating special resources such as application software from node to node within the grid, while external evolution occurs between grids. This thesis develops a framework for grid evolution that insulates users from the complexities of grids. This framework has at its core a resource broker together with a grid monitor to cope with internal and external evolution, advance reservation, fault tolerance, monitoring of the grid environment, increased resource utilisation and the high availability of grid resources. The starting point for the present framework is when the grid receives a job whose requirements do not exist on the required node, which triggers grid evolution. If the grid has all the requirements scattered across its nodes, internal evolution ensues, enabling the grid to migrate the required resources to the required node in order to satisfy the job requirements; if the grid does not have these resources, external evolution enables the grid either to collect them from other grids (permanent evolution) or to send the job to other grids for execution (just-in-time evolution). Finally, a simulation tool called EVOSim has been designed, developed and tested. It is written in Oracle 10g and has been used to create four grids, each of which has a different setup, including different nodes, application software, data and policies. Experiments were carried out by submitting jobs to the grids at run time, and then comparing the results and analysing the performance of the grids that use the evolution approach against those that do not. The results of these experiments demonstrate that these features significantly improve the performance of grid environments and provide excellent scheduling results, with a reduced number of rejected jobs.
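The internal/external evolution decision described in this abstract can be illustrated with a minimal sketch. The code below is not from the thesis or EVOSim; it is a hypothetical Python illustration of the decision flow, and all names (Grid, Node, schedule, the sample resource labels) are invented for the example.

```python
# Hypothetical sketch of the evolution decision flow described in the abstract.
# Not code from the thesis; all names are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    resources: set = field(default_factory=set)


@dataclass
class Grid:
    name: str
    nodes: list = field(default_factory=list)

    def all_resources(self):
        # Union of the resources available anywhere in this grid.
        return set().union(*(n.resources for n in self.nodes))


def schedule(job_requirements, target, grid, other_grids):
    """Decide which kind of evolution (if any) a job triggers."""
    missing = set(job_requirements) - target.resources
    if not missing:
        return "no evolution: target node already satisfies the job"

    if missing <= grid.all_resources():
        # Internal evolution: migrate the missing resources from other nodes
        # of the same grid onto the target node.
        target.resources |= missing
        return "internal evolution: resources migrated within the grid"

    for other in other_grids:
        if missing <= other.all_resources():
            # External evolution (permanent): collect the resources from another grid.
            target.resources |= missing
            return f"external (permanent) evolution: resources collected from {other.name}"

    # External evolution (just-in-time): send the job to another grid for execution.
    return "external (just-in-time) evolution: job forwarded to another grid"


if __name__ == "__main__":
    g1 = Grid("G1", [Node("n1", {"matlab"}), Node("n2", {"blast"})])
    g2 = Grid("G2", [Node("m1", {"gromacs"})])
    print(schedule({"matlab", "blast"}, g1.nodes[0], g1, [g2]))  # internal
    print(schedule({"gromacs"}, g1.nodes[0], g1, [g2]))          # external (permanent)
    print(schedule({"ansys"}, g1.nodes[0], g1, [g2]))            # external (just-in-time)
```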
3

Um ambiente de monitoramento de recursos e escalonamento cooperativo de aplicações paralelas em grades computacionais. / A resource monitoring and parallel application cooperative scheduling environment on computing grids.

Paula, Nilton Cézar de, 23 January 2009
Grade computacional é uma alternativa para melhorar o desempenho de aplicações paralelas, por permitir o uso simultâneo de vários recursos distribuídos. Entretanto, para que a utilização de uma grade seja adequada, é necessário que os recursos sejam utilizados de maneira a permitir a otimização de algum critério. Para isto, várias estratégias de escalonamento têm sido propostas, mas o grande desafio é extrair o potencial que os recursos oferecem para a execução de aplicações paralelas. Uma estratégia bastante usada em sistemas de escalonamento atuais é escalonar uma aplicação paralela nos recursos de um único cluster. Contudo, apesar da estratégia ser simples, ela é muito limitada, devido principalmente a baixa utilização dos recursos. Este trabalho propõe e implementa o sistema GCSE (Grid Cooperative Scheduling Environment) que provê uma estratégia de escalonamento cooperativo para usar eficientemente os recursos distribuídos. Os processos de uma aplicação paralela podem ser distribuídos em recursos de vários clusters e computadores, todos conectados a redes de comunicação públicas. GCSE também gerencia a execução das aplicações, bem como oferece um conjunto de primitivas que fornece informações sobre os ambientes de execução para o suporte à comunicação entre processos. Além disto, uma estratégia de antecipação de dados é proposta para aumentar ainda mais o desempenho das aplicações. Para realizar um bom escalonamento é preciso descobrir os recursos distribuídos. Neste sentido, o sistema LIMA (Light-weIght Monitoring Architecture) foi projetado e implementado. Este sistema provê um conjunto de estratégias e mecanismos para o armazenamento distribuído e acesso eficiente às informações sobre os recursos distribuídos. Além disto, LIMA adiciona facilidades de descobrimento e integração com o GCSE e outros sistemas. Por fim, serão apresentados os testes e avaliações dos resultados com o uso integrado dos sistemas GCSE e LIMA, compondo um ambiente robusto para a execução de aplicações paralelas. / A computing grid is an alternative for improving the performance of parallel applications, because it allows the simultaneous use of many distributed resources. However, in order to take advantage of a grid, the resources must be used in such a way that some criterion can be optimized. Thus, various scheduling strategies have been proposed, but the great challenge is to exploit the potential that the resources offer for parallel application execution. A strategy often used in current scheduling systems is to schedule a parallel application on the resources of a single cluster. Even though this strategy is simple, it is very limited, mainly due to low resource utilization. This thesis proposes and implements the GCSE system (Grid Cooperative Scheduling Environment), which provides a cooperative scheduling strategy for using distributed resources efficiently. The processes of a parallel application can be distributed across the resources of many clusters and computers, all connected by public communication networks. GCSE also manages application execution and offers a set of primitives that provides information about the execution environments to support communication between processes. Moreover, a data prefetching strategy is proposed to further improve application performance. In order to perform good scheduling, the distributed resources must first be discovered. To this end, the LIMA system (Light-weIght Monitoring Architecture) was designed and implemented.
This system provides a set of strategies and mechanisms for distributed storage of, and efficient access to, information about the distributed resources. In addition, LIMA offers facilities for resource discovery and for integrating its functionality with both GCSE and other systems. Finally, tests and evaluations of the results are presented for the integrated use of the GCSE and LIMA systems, which together compose a robust environment for executing parallel applications.
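The cooperative scheduling idea described above (spreading the processes of one parallel application across the free resources of several clusters, guided by monitored information) can be sketched as follows. This is a hypothetical Python illustration, not the GCSE or LIMA code base; ClusterInfo, cooperative_schedule, and the slot counts are invented names and numbers.

```python
# Hypothetical sketch of cooperative scheduling across clusters.
# Not GCSE/LIMA code; names and numbers are invented for illustration.

from dataclasses import dataclass


@dataclass
class ClusterInfo:
    name: str
    free_slots: int  # as reported by a monitoring service (LIMA's role in the thesis)


def cooperative_schedule(num_processes, clusters):
    """Greedily place processes on the clusters with the most free slots first."""
    placement = {}
    remaining = num_processes
    for cluster in sorted(clusters, key=lambda c: c.free_slots, reverse=True):
        if remaining == 0:
            break
        take = min(remaining, cluster.free_slots)
        if take:
            placement[cluster.name] = take
            remaining -= take
    if remaining:
        raise RuntimeError(f"not enough free slots for {remaining} processes")
    return placement


if __name__ == "__main__":
    monitored = [ClusterInfo("clusterA", 8), ClusterInfo("clusterB", 3), ClusterInfo("pcs", 2)]
    # Splits one 10-process application across two clusters instead of rejecting it.
    print(cooperative_schedule(10, monitored))  # e.g. {'clusterA': 8, 'clusterB': 2}
```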
4

EFFICIENT GRID COMPUTING BASED ALGORITHMS FOR POWER SYSTEM DATA ANALYSIS

Mohsin Ali, Unknown Date
The role of electric power systems has grown steadily in both scope and importance over time, making electricity increasingly recognized as a key to social and economic progress in many developing countries. In a sense, reliable power systems constitute the foundation of all prospering societies. The constant expansion of electric power systems, along with increased energy demand, requires that power systems become more and more complex. Such complexity results in considerable uncertainty, which demands comprehensive reliability and security assessment to ensure a reliable energy supply. Power industries in many countries are facing these challenges and are trying to increase their computational capability to handle the ever-increasing data and analytical needs of operations and planning. Moreover, deregulated electricity markets have been in operation in a number of countries since the 1990s. During the deregulation process, vertically integrated power utilities have been reformed into competitive markets, with the initial goals of improving market efficiency, minimizing production costs and reducing electricity prices. Alongside the benefits achieved by deregulation, several new challenges have also been observed in the market. Due to fundamental changes in the electric power industry, traditional management and analysis methods cannot deal with these new challenges. Deterministic reliability assessment criteria still exist, but they do not reflect the probabilistic nature of power systems, and the worst-case analysis of the deterministic approach results in excess operating costs. Probabilistic methods, on the other hand, are now widely accepted. The analytical method uses mathematical formulas for reliability evaluation and generates results quickly, but it requires many simplifying assumptions and is not suitable for large and complex systems. Simulation-based techniques capture much of the uncertainty and simulate the random behavior of the system; however, they require substantial computing power, memory and other computing resources. Power engineers have to run thousands of time-domain simulations to determine stability for a set of credible disturbances before dispatching. For example, security analysis is associated with the steady-state and dynamic response of the power system to various disturbances. Real-time security assessment is highly desirable, especially in the market environment. Therefore, power system reliability and security analysis in the deregulated environment requires novel analysis methods that can provide comprehensive results, together with high performance computing (HPC) power to carry out such analysis within a limited time. Further, with the deregulation of the power industry, operational control has been distributed among many organizations. The power grid is a complex network involving a range of energy resources, including nuclear, fossil and renewable resources, with many operational levels and layers, including control centers, power plants and transmission and distribution systems. The energy resources are managed by different organizations in the electricity market, and all of these participants (producers, consumers and operators) can affect the operational state of the power grid at any time. Moreover, adequacy analysis is an important task in power system planning and can be regarded as a collaborative task, which demands collaboration among the electricity market participants to ensure a reliable energy supply.
Grid computing is gaining attention from power engineering experts as an ideal solution to the computational difficulties being faced by the power industry. Grid computing infrastructure involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid computing technology offers feasible support for the design and development of grid-based infrastructure for power system reliability and security analysis. It can help in building infrastructure that provides a high performance computing and collaborative environment and offers an optimal trade-off between cost and efficiency. While power system analysis is a vast topic, only a limited amount of research has so far investigated the applications of grid computing in power systems. This thesis investigates probabilistic reliability and security analysis of complex power systems in order to develop new techniques that provide comprehensive results with high efficiency. A review of existing techniques was conducted to determine the computational needs in the area of power systems. The main objective of this research is to propose and develop a general framework of computing grid and special grid services for probabilistic power system reliability and security assessment in the electricity market. As a result of this research, grid computing based techniques are proposed for probabilistic load flow analysis, probabilistic small-signal analysis, probabilistic transient stability analysis and probabilistic contingency analysis of power systems. Moreover, a grid computing based system is designed and developed for the monitoring and control of distributed generation systems. As part of this research, a detailed review is presented of the possible applications of this technology in other aspects of power systems. It is proposed that these grid-based techniques will provide comprehensive results with greatly improved efficiency, and ultimately enhance the existing computing capabilities of power companies in a cost-effective manner. As part of this research, a small-scale computing grid has been developed, consisting of grid services for the probabilistic reliability and security assessment techniques. A significant outcome of this research is improved performance, accuracy, and security of data sharing and collaboration. More importantly, grid-based computing will improve the capability of power system analysis in a deregulated environment, where complex and large amounts of data would otherwise be impossible to analyze without huge investments in computing facilities.
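The reason Monte Carlo based probabilistic assessment maps naturally onto a computing grid is that the sampled cases are independent, so they can be evaluated by many workers and only their statistics need to be aggregated. The sketch below is a hypothetical Python illustration of that structure, using a local process pool as a stand-in for grid workers; evaluate_sample is a toy placeholder, not a real load-flow or contingency calculation, and none of it is code from the thesis.

```python
# Hypothetical sketch: the embarrassingly parallel structure of Monte Carlo
# based probabilistic assessment.  A multiprocessing pool stands in for grid
# workers; evaluate_sample is a toy placeholder, not a real power system study.

import random
from multiprocessing import Pool
from statistics import mean


def evaluate_sample(seed):
    """Placeholder for one sampled analysis case (e.g. one load/outage scenario).
    Returns 1 if the sampled state violates a limit, 0 otherwise."""
    rng = random.Random(seed)
    load = rng.gauss(1.0, 0.1)        # sampled system loading (per unit)
    capacity = rng.uniform(0.9, 1.3)  # sampled available capacity (per unit)
    return 1 if load > capacity else 0


def estimate_violation_probability(num_samples=10_000, workers=4):
    # Independent samples are farmed out to workers; only counts come back.
    with Pool(workers) as pool:
        results = pool.map(evaluate_sample, range(num_samples))
    return mean(results)


if __name__ == "__main__":
    print(f"estimated violation probability: {estimate_violation_probability():.3f}")
```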
