91

Distribuovaný systém kryptoanalýzy / Distributed system for cryptanalysis

Zelinka, Miloslav Unknown Date (has links)
This work deals with cryptanalysis, computational performance, and its distribution. It describes methods of distributing computational power for the needs of cryptanalysis, and it further examines techniques for accelerating attacks on cryptographic algorithms, particularly hash functions. The work explains the relatively new term cloud computing and its use in cryptography, followed by examples of practical application, and it also discusses how grid computing can be exploited for cryptanalysis. The last part of the work presents the design of a system that uses cloud computing for breaking access passwords.
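To make the closing idea concrete, here is a minimal sketch of distributing a password-recovery workload, assuming a known target hash and a small lowercase keyspace; the sharding scheme and all names are illustrative, not the system designed in the thesis. The keyspace is split by leading character, and each worker exhausts its own shard:

import hashlib
import itertools
import string
from multiprocessing import Pool

TARGET = hashlib.sha256(b"abc").hexdigest()  # digest we try to invert
ALPHABET = string.ascii_lowercase
LENGTH = 3

def search_shard(first_char):
    """Each worker exhausts the keyspace slice starting with one character."""
    for tail in itertools.product(ALPHABET, repeat=LENGTH - 1):
        candidate = first_char + "".join(tail)
        if hashlib.sha256(candidate.encode()).hexdigest() == TARGET:
            return candidate
    return None

if __name__ == "__main__":
    # One shard per leading character; on a real grid each shard would be
    # a job submitted to a remote node rather than a local process.
    with Pool() as pool:
        for result in pool.imap_unordered(search_shard, ALPHABET):
            if result is not None:
                print("recovered password:", result)
                pool.terminate()
                break

On an actual grid, a manager would submit each shard as an independent job and cancel the outstanding ones as soon as any worker reports success.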
92

Diseño, Especificación, Validación y Aplicación de una Arquitectura modular de gestión de Redes Inalámbricas de Sensores / Design, specification, validation, and application of a modular management architecture for wireless sensor networks

Pileggi, Salvatore Flavio 15 April 2011 (has links)
In recent years, growing commercial interest in wireless sensor networks has driven intense research activity, producing significant advances both in the underlying technology and in engineering aspects at every level. Wireless sensor networks are built around the concept of a low-cost autonomous sensor node that offers limited computing and storage resources, low transmission power, and advanced sensing capabilities. They are characterized by extremely small size and by engineering oriented toward energy efficiency. Despite the availability of highly advanced solutions, characterized by efficiency and flexibility, massive commercial adoption has repeatedly been put forward as a plausible prospect yet seems slow to materialize definitively. The main causes relate, directly or indirectly, to two factors: high cost and a lack of sufficient reliability/robustness. One consequence of the ad hoc architecture development that currently characterizes wireless sensor networks is a proliferation of local optima, which is the main cause of a worrying absence of standards, both for communication protocols and for the organization and representation of information. New business and exploitation models within latest-generation virtual organizations are also current topics of attention in the international scientific community. This work is situated within recent lines of research that aim to reconcile advanced solutions, characterized by innovative engineering, with their effective application in the real world / Pileggi, S. F. (2011). Diseño, Especificación, Validación y Aplicación de una Arquitectura modular de gestión de Redes Inalámbricas de Sensores [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10740 / Palancia
93

Constructing Covering Arrays using Parallel Computing and Grid Computing

Avila George, Himer 10 September 2012 (has links)
A good strategy for testing a software component involves generating the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique for accomplishing this goal is combinatorial testing, a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test suites of economical size which cover the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to the functional testing of software components. The use of covering arrays allows all interactions of a given size among the input parameters to be tested using the minimum number of test cases. For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time spent on the software testing process. Because of the importance of constructing (near) optimal covering arrays, much research has been carried out on effective methods for constructing them. Reported methods include (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods. Among metaheuristics, simulated annealing has provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions. In this thesis, we propose the development of an improved simulated annealing algorithm. / Avila George, H. (2012). Constructing Covering Arrays using Parallel Computing and Grid Computing [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17027 / Palancia
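As an illustration of the metaheuristic the thesis builds on, the following sketch applies simulated annealing to covering array construction: the cost counts uncovered t-way interactions, and a move mutates one cell. The parameters and the full cost recomputation per move are simplifications (a serious implementation updates the cost incrementally), so treat this as a toy model of the approach, not the thesis's algorithm:

import itertools
import math
import random

def count_missing(array, t, v):
    """Cost: number of t-way interactions not yet covered by the array."""
    k = len(array[0])
    missing = 0
    for cols in itertools.combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        missing += v ** t - len(seen)
    return missing

def anneal_covering_array(N, t, k, v, temp=1.0, cooling=0.999, steps=200_000):
    """Search for a covering array CA(N; t, k, v) by simulated annealing."""
    array = [[random.randrange(v) for _ in range(k)] for _ in range(N)]
    cost = count_missing(array, t, v)
    while steps > 0 and cost > 0:
        r, c = random.randrange(N), random.randrange(k)
        old = array[r][c]
        array[r][c] = random.randrange(v)
        new_cost = count_missing(array, t, v)
        delta = new_cost - cost
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cost = new_cost          # accept the move
        else:
            array[r][c] = old        # reject: undo the change
        temp *= cooling
        steps -= 1
    return array if cost == 0 else None

# Example: try to build a CA(6; 2, 4, 2) -- six tests covering all pairs
# of four binary parameters (the optimal size for this instance is 5).
ca = anneal_covering_array(6, 2, 4, 2)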
94

Management-Elemente für mehrdimensional heterogene Cluster / Management elements for multidimensionally heterogeneous clusters

Petersen, Karsten 16 June 2003 (has links)
Diploma thesis at the intersection of cluster computing and grid computing. Integration of distributed resources into an infrastructure. Realization of a simple checkpointing environment.
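Since the abstract gives no detail, here is a generic sketch of what a simple checkpointing environment provides, under the assumption of a single long-running loop whose state fits in memory; the file name and state layout are invented for illustration:

import os
import pickle

CHECKPOINT = "state.ckpt"  # hypothetical checkpoint file

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"i": 0, "total": 0}

def save_state(state):
    """Write atomically so a crash mid-write cannot corrupt the checkpoint."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
for i in range(state["i"], 1_000_000):
    state["total"] += i          # the actual long-running work
    state["i"] = i + 1
    if i % 10_000 == 0:          # checkpoint every 10k iterations
        save_state(state)
print(state["total"])

If the process is killed and restarted, it resumes from the last saved iteration instead of from zero; a cluster-wide environment adds coordination across nodes, but the save/restore contract is the same.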
95

An Open Framework for Developing Distributed Computing Environments for Multidisciplinary Computational Simulations

Bangalore, Purushotham Venkataramaiah 10 May 2003 (has links)
Multidisciplinary computational simulations involve interactions between distributed applications, datasets, products, resources, and users. Because such simulation software has traditionally assumed a single computer and a small set of users, the applications that have been developed are often unfriendly to incorporation into a distributed model. However, advances in networking infrastructure and the natural tendency for information to be geographically distributed place strong requirements on integrating single-computer codes with distributed information sources, as well as multiple codes that are geographically distributed in their execution. The hypothesis of this dissertation is that it is possible, via novel integration of Internet, distributed computing, and Grid technologies, to create a distributed computational simulation system that satisfies the requirements of modern multidisciplinary computational simulation systems without compromising the functionality, performance, or security of existing applications. Furthermore, such a system would integrate disparate applications, resources, and users and would improve the productivity of users by providing new functionality not currently available. The hypothesis is proved constructively by first prototyping the Enterprise Computational Services framework, based on a multi-tier architecture using the Java 2 Enterprise Edition platform and Web Services, and then prototyping two distributed systems on top of this enabling framework: the Distributed Marine Environment Forecast System and the Distributed Simulation System for Seismic Performance of Urban Regions. Several interfaces to the framework are prototyped to illustrate that the same framework can be used to develop the multiple front-end clients required to support different types of users within a given computational domain. The two domain-specific distributed environments prototyped using the framework illustrate that the framework provides a reusable common infrastructure irrespective of the computational domain. The effectiveness and utility of the distributed system and the framework are demonstrated using a representative collection of computational simulations. Additional benefits provided by the distributed systems, in terms of new functionality, are evaluated to determine the impact on user productivity. The key contribution of this dissertation is a reusable infrastructure that can evolve to meet the requirements of next-generation hardware and software architectures while supporting interaction between a diverse set of users, distributed computational resources, and multidisciplinary applications.
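The framework itself is built on J2EE and Web Services; as a language-neutral illustration of the multi-tier idea (one shared service tier, many thin front-end clients), here is a toy HTTP service in Python. The endpoints, payloads, and the stand-in "simulation" are all invented for this sketch and are not the Enterprise Computational Services API:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

JOBS = {}  # job id -> result; a real framework would persist this

class ServiceTier(BaseHTTPRequestHandler):
    """A single service tier that any front-end client can call over HTTP."""

    def do_POST(self):
        # Body: {"id": ..., "x": ...}; the "simulation" is a stand-in squaring x.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        JOBS[body["id"]] = body["x"] ** 2
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps({"id": body["id"]}).encode())

    def do_GET(self):
        job_id = self.path.strip("/")
        result = JOBS.get(job_id)
        self.send_response(200 if result is not None else 404)
        self.end_headers()
        self.wfile.write(json.dumps({"result": result}).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ServiceTier).serve_forever()

A command-line client, a web portal, or a batch script can all submit with a POST and fetch results with a GET against the same tier, which is the reuse property the dissertation's multiple front-end clients demonstrate.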
96

A Grid-based Middleware for Scalable Processing of Remote Data

Glimcher, Leonid S. 24 June 2008 (has links)
No description available.
97

Runtime Algorithm Selection For Grid Environments: A Component Based Framework

Bora, Prachi 22 July 2003 (has links)
Grid environments are inherently heterogeneous. If the computational power provided by collaborations on the Grid is to be fully harnessed, applications must be able to adapt automatically to changes in the execution environment. The application writer should not be burdened with choosing the right algorithm and implementation every time the resources on which the application runs change. Much research has been done on adapting applications to changing conditions, but existing systems do not provide a unified interface that permits algorithm selection at runtime. The goal of this research is to design and develop a unified interface to applications that permits seamless access to different algorithms providing similar functionality. Long-running, computationally intensive scientific applications can produce huge amounts of performance data. Often this data is discarded once the application's execution is complete, yet it can be mined for information about algorithms and their performance, which in turn can be used to choose algorithms intelligently. The research described in this thesis aims at designing and developing a component-based unified interface for runtime algorithm selection in grid environments. This unified interface is necessary so that the application code does not change if a new algorithm is used to solve the problem. The overhead associated with making the algorithm choice transparent to the application is evaluated. We use a data mining approach to algorithm selection and evaluate its potential effectiveness for scientific applications. / Master of Science
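A hedged sketch of the central idea follows: a unified interface that hides which algorithm runs and feeds observed performance back into the next choice. The class and method names are invented, and a real selector would condition on problem features (as the thesis's data mining approach does) rather than on a plain average runtime:

import time

class AlgorithmRegistry:
    """Unified interface: callers invoke solve(); the registry picks the
    implementation with the best recorded runtime so far."""

    def __init__(self):
        self.impls = {}      # name -> callable
        self.history = {}    # name -> list of observed runtimes

    def register(self, name, fn):
        self.impls[name] = fn
        self.history[name] = []

    def solve(self, data):
        # Prefer untried algorithms, then the one with the best average time.
        name = min(self.impls,
                   key=lambda n: (sum(self.history[n]) / len(self.history[n]))
                                 if self.history[n] else 0.0)
        start = time.perf_counter()
        result = self.impls[name](data)
        self.history[name].append(time.perf_counter() - start)
        return result

registry = AlgorithmRegistry()
registry.register("builtin_sort", sorted)
registry.register("reverse_twice", lambda xs: sorted(xs, reverse=True)[::-1])
# Application code never names an algorithm; the choice happens at runtime.
print(registry.solve([3, 1, 2]))

Swapping in a new algorithm is a one-line register() call; the application code that calls solve() never changes, which is exactly the transparency property whose overhead the thesis evaluates.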
98

GEMS: A Fault Tolerant Grid Job Management System

Tadepalli, Sriram Satish 08 January 2004 (has links)
Grid environments are inherently unstable: resources join and leave without prior notification, so application fault detection, checkpointing, and restart are of foremost importance. The need for fault tolerance is especially acute for large parallel applications, since the failure rate grows with the number of processors and the duration of the computation. A Grid job management system hides the heterogeneity of the Grid and the complexity of the Grid protocols from the user: the user submits a job, and the system finds an appropriate resource, submits the job, and transfers the output files back upon completion. However, current Grid job management systems do not detect application failures. The goal of this research is to develop a Grid job management system that can efficiently detect application failures. Failed jobs are restarted either on the same resource or migrated to another resource and restarted there. The research also aims to identify the role of local resource managers in the fault detection and migration of Grid applications. / Master of Science
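The detect-then-restart-or-migrate policy can be sketched in a few lines. The resource list, the random failure model, and the round-robin migration order below are stand-ins for real grid submission and monitoring, not the GEMS implementation:

import random
import time

RESOURCES = ["node-a", "node-b", "node-c"]  # hypothetical grid resources

def submit(job, resource):
    """Stand-in for grid submission; fails randomly to mimic unstable nodes."""
    time.sleep(0.1)                 # pretend the job runs for a while
    return random.random() > 0.3    # True = completed, False = node failed

def run_with_failover(job, max_attempts=5):
    """Detect failure and restart, migrating to another resource each retry."""
    for attempt in range(max_attempts):
        resource = RESOURCES[attempt % len(RESOURCES)]
        if submit(job, resource):
            print(f"{job} completed on {resource}")
            return True
        print(f"{job} failed on {resource}; migrating and restarting")
    return False

run_with_failover("job-42")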
99

Efficient Grid Computing Based Algorithms for Power System Data Analysis

Mohsin Ali Unknown Date (has links)
The role of electric power systems has grown steadily in both scope and importance over time, making electricity increasingly recognized as a key to social and economic progress in many developing countries. In a sense, reliable power systems constitute the foundation of all prospering societies. The constant expansion of electric power systems, along with increased energy demand, makes power systems more and more complex, and such complexity produces much uncertainty, which demands comprehensive reliability and security assessment to ensure a reliable energy supply. Power industries in many countries are facing these challenges and are trying to increase their computational capability to handle the ever-increasing data and analytical needs of operations and planning. Moreover, deregulated electricity markets have been in operation in a number of countries since the 1990s. During the deregulation process, vertically integrated power utilities were reformed into competitive markets, with the initial goals of improving market efficiency, minimizing production costs, and reducing electricity prices. Alongside the benefits achieved by deregulation, several new challenges have been observed in the market. Because of these fundamental changes to the electric power industry, traditional management and analysis methods cannot cope. Deterministic reliability assessment criteria still exist, but they do not capture the probabilistic nature of power systems, and the worst-case analysis of the deterministic approach results in excess operating costs. Probabilistic methods, on the other hand, are now widely accepted. Analytical methods evaluate reliability through mathematical formulas and generate results quickly, but they require many strong assumptions and are not suitable for large and complex systems. Simulation-based techniques capture much of the uncertainty by simulating the random behavior of the system; however, they require substantial computing power, memory, and other resources. Power engineers have to run thousands of time-domain simulations to determine stability for a set of credible disturbances before dispatching. Security analysis, for example, is associated with the steady-state and dynamic response of the power system to various disturbances, and real-time security assessment is highly desirable, especially in the market environment. Therefore, novel analysis methods are required for power system reliability and security in the deregulated environment, methods that can provide comprehensive results together with the high performance computing (HPC) power needed to carry out such analysis within a limited time. Further, with deregulation, operational control has been distributed among many organizations. The power grid is a complex network involving a range of energy resources, including nuclear, fossil, and renewable, with many operational levels and layers, including control centers, power plants, and transmission and distribution systems. These resources are managed by different organizations in the electricity market, and all participants (producers, consumers, and operators) can affect the operational state of the power grid at any time. Moreover, adequacy analysis is an important task in power system planning and can be regarded as a collaborative task that demands cooperation among electricity market participants for a reliable energy supply.
Grid computing is gaining attention from power engineering experts as an ideal solution to the computational difficulties facing the power industry. A grid computing infrastructure involves the integrated and collaborative use of computers, networks, databases, and scientific instruments owned and managed by multiple organizations, and it offers feasible support for the design and development of grid-based infrastructure for power system reliability and security analysis. It can provide a high performance computing and collaborative environment and offer an optimal trade-off between cost and efficiency. While power system analysis is a vast topic, only a limited amount of research has so far investigated the applications of grid computing in power systems. This thesis investigates probabilistic reliability and security analysis of complex power systems in order to develop new techniques that provide comprehensive results with great efficiency. A review of existing techniques was conducted to determine the computational needs of the area. The main objective of this research is to propose and develop a general framework of computing grid and special grid services for probabilistic power system reliability and security assessment in the electricity market. As a result of this research, grid computing based techniques are proposed for probabilistic load flow analysis, probabilistic small signal analysis, probabilistic transient stability analysis, and probabilistic contingency analysis. Moreover, a grid computing based system is designed and developed for the monitoring and control of distributed generation systems, and a detailed review is presented of possible applications of this technology in other aspects of power systems. These grid-based techniques are expected to provide comprehensive results with great efficiency and ultimately to enhance the existing computing capabilities of power companies in a cost-effective manner. As part of this research, a small-scale computing grid is developed, consisting of grid services for probabilistic reliability and security assessment. A significant outcome of this research is improved performance, accuracy, and security of data sharing and collaboration. More importantly, grid-based computing will improve the capability of power system analysis in a deregulated environment, where complex and large amounts of data would otherwise be impossible to analyze without huge investments in computing facilities.
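The simulation-based assessment the abstract describes is essentially embarrassingly parallel, which is what makes it a natural grid workload. The following hedged sketch, using an invented five-unit generation system and treating local worker processes as stand-ins for grid jobs, estimates loss-of-load probability by Monte Carlo sampling:

import random
from multiprocessing import Pool

# Hypothetical system: generating units as (capacity MW, forced outage rate).
UNITS = [(200, 0.05), (150, 0.04), (100, 0.08), (100, 0.08), (50, 0.10)]
PEAK_LOAD = 450  # MW

def trial(seed):
    """One Monte Carlo state sample: does available capacity meet the load?"""
    rng = random.Random(seed)
    available = sum(cap for cap, outage_rate in UNITS
                    if rng.random() > outage_rate)
    return available < PEAK_LOAD  # True if this sampled state sheds load

if __name__ == "__main__":
    N = 200_000
    # Each batch of trials could run as an independent grid job; here local
    # worker processes approximate that distribution of the workload.
    with Pool() as pool:
        failures = sum(pool.map(trial, range(N), chunksize=5_000))
    print(f"estimated loss-of-load probability: {failures / N:.4f}")

Because each sample is independent, the same pattern scales from one machine to a computing grid by replacing the local pool with job submission, which is the efficiency argument the thesis makes for grid-based probabilistic analysis.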
