111

Uma ferramenta para monitoramento do sistema JoiN de processamento maciçamente paralelo virtual / A monitoring tool for the massively parallel virtual processing system JoiN

Pereira, Ana Maria de Seixas, Universidade Estadual de Campinas. Faculdade de Engenharia Elétrica e de Computação. Programa de Pós-Graduação em Engenharia Elétrica 12 August 2018 (has links)
Advisor: Marco Aurelio Amaral Henriques / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2008
/ Abstract: Managing and using grid computing environments requires considerable effort to obtain the information needed for administration and for identifying and resolving problems. Tracking application performance, determining the origin of problems, and identifying and eliminating bottlenecks are functions that require detailed information about the platform and the applications running on it. In this work we present a proposal for the implementation of a monitoring tool for JoiN, a massively parallel virtual processing system. After analyzing some of the existing tools for monitoring distributed environments, we could not identify among them a solution that meets the requirements of the JoiN system; we therefore implemented a new tool that seeks to integrate the best features found in the tools analyzed. The main difficulties in implementing this tool relate to collecting and publishing the large amount of information needed to observe and monitor the JoiN system and to facilitate its use and administration, while interfering as little as possible with its performance. The results show that the tool implemented in the JoiN system offers a good cost/benefit ratio in monitoring the system's main functions without significantly impacting the execution of its parallel applications. / Mestrado / Engenharia de Computação / Mestre em Engenharia Elétrica
112

Support for Information Management in Virtual Organizations / Support for Information Management in Virtual Organizations

Yadav, Pavan Kumar, Kalyan, Kosuri Naga Krishna January 2006 (has links)
Globalization and innovation are revolutionizing higher education and creating new market trends. Different nations have their own patterns and frameworks for delivering educational services. Educational institutions are also pursuing organizational and behavioural changes to secure their future as they hunt for new financial resources, face new competition, and seek greater prestige domestically and internationally. The coming years will decide which universities survive the market trends, the competition, and the expectations of students (clients). The survival-of-the-fittest paradigm plays a prominent role in ideas of how higher education will be delivered to students in the future through instructional technology and distance education. In our view, the delivery of educational services has shifted from the management's point of view to the student's point of view, leading to services that have more impact on students' education, knowledge, and experience within the institution. In this thesis we provide information about how to support and manage information in virtual organizations. We also explore university frameworks and discuss a case study on different ways of providing better support for information management, resulting in the delivery of student-driven services and unique facilities. We look at different aspects of university workflows and procedures and gain insight into students' expectations of the organization. This investigation should help students know what services to expect from universities, and help management better understand students' needs and develop a framework for the proper execution of these services.
113

Decentralized resource brokering for heterogeneous grid environments

Tordsson, Johan January 2006 (has links)
The emergence of Grid computing infrastructures enables researchers to share resources and collaborate in more efficient ways than before, despite belonging to different organizations and being geographically distant. While the Grid computing paradigm offers new opportunities, it also gives rise to new difficulties. One such problem is the selection of resources for user applications. Given the large and disparate set of Grid resources, manual resource selection becomes impractical, even for experienced users. This thesis investigates methods, algorithms and software for a Grid resource broker, i.e., a scheduling agent that automates the resource selection process for the user. The development of such a component is a non-trivial task as Grid resources are heterogeneous in hardware, software, availability, ownership and usage policies. A wide range of algorithmically difficult issues must also be solved, including characterization of jobs, prediction of resource performance, data placement considerations, and how to provide Quality of Service guarantees. One contribution of this thesis is the development of resource brokering algorithms that enable resource selection based on Grid job performance predictions and use advance reservations to provide Quality of Service guarantees. The thesis also includes an algorithm for co-allocation of sets of jobs. This algorithm guarantees a simultaneous start of each subjob, as required, e.g., when running larger-than-supercomputer simulations that involve multiple resources. Today we have the somewhat paradoxical situation where Grids, originally aimed at overcoming interoperability problems between different computing platforms, themselves struggle with interoperability problems caused by the wide range of interfaces, protocols and data formats used in different environments.
The reasons for this situation are obvious, expected and almost impossible to avoid, as the task of defining appropriate standards, models and best-practices must be preceded by basic research, proof-of-concept implementations and real-world testing. We address the interoperability problem with a generic Grid resource brokering architecture and job submission service. By using (proposed) standard formats and protocols, the service acts as an interoperability-bridge that translates job requests between clients and resources running different Grid middlewares. This concept is demonstrated by the integration of the service with three different Grid middlewares. The service also enables users to both fine-tune the existing resource selection algorithms and plug in custom brokering algorithms tailored to their requirements.
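The co-allocation guarantee described above (a simultaneous start for every subjob, negotiated against per-resource reservations) can be sketched in a few lines. This is an illustrative model only, not the thesis's implementation: the `Resource`, `select_resource`, and `coallocate` names, the throughput-based performance prediction, and the non-overlapping reservation lists are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    flops: float  # predicted throughput (operations per second)
    # Existing advance reservations as (start, end) pairs, assumed non-overlapping.
    reserved: list = field(default_factory=list)

    def earliest_start(self, t, duration):
        """Earliest time >= t at which a slot of `duration` fits
        between this resource's reservations."""
        for s, e in sorted(self.reserved):
            if t + duration <= s:
                break          # the job fits before this reservation
            t = max(t, e)      # otherwise push the start past it
        return t

def select_resource(resources, job_ops, release_time=0.0):
    """Broker step: pick the resource with the earliest predicted
    completion time for a job of `job_ops` operations."""
    def completion(r):
        d = job_ops / r.flops
        return r.earliest_start(release_time, d) + d
    return min(resources, key=completion)

def coallocate(resources, job_ops, release_time=0.0):
    """Find the earliest time at which ALL subjobs (one per resource)
    can begin simultaneously -- the simultaneous-start guarantee."""
    t = release_time
    while True:
        starts = [r.earliest_start(t, job_ops / r.flops) for r in resources]
        if max(starts) == t:   # every resource is free at t
            return t
        t = max(starts)        # retry at the latest individual constraint
```

A fast resource wins the single-job selection, while co-allocation is forced to wait out the slowest resource's reservations, which is exactly the trade-off advance reservations make visible.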
114

Distribuovaný systém kryptoanalýzy / A distributed system for cryptanalysis

Zelinka, Miloslav Unknown Date (has links)
This work deals with cryptanalysis, computational performance, and its distribution. It describes methods of distributing computational work for the needs of cryptanalysis, and focuses on techniques for speeding up attacks on cryptographic algorithms, especially hash functions. The work explains the relatively new term cloud computing and its use in cryptography, followed by examples of its practical utilization. It also examines how grid computing can be used for cryptanalysis. The last part of the work presents a system design that uses cloud computing for recovering access passwords.
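The core idea behind distributing hash cracking, as described above, is that the candidate keyspace can be partitioned so that each worker derives its share purely from its own id, with no coordination. The sketch below illustrates this with a hypothetical first-character partitioning scheme and function names of our own; it is not the system designed in the thesis.

```python
import hashlib
import itertools
import string

ALPHABET = string.ascii_lowercase

def candidates(length, worker_id, num_workers):
    """Yield worker `worker_id`'s share of the length-`length` keyspace.
    Candidates are partitioned by their first character, so each worker
    can enumerate its share independently."""
    for i, first in enumerate(ALPHABET):
        if i % num_workers != worker_id:
            continue
        for rest in itertools.product(ALPHABET, repeat=length - 1):
            yield first + "".join(rest)

def crack(target_digest, length, worker_id, num_workers):
    """Search this worker's partition for a preimage of target_digest."""
    for pw in candidates(length, worker_id, num_workers):
        if hashlib.sha256(pw.encode()).hexdigest() == target_digest:
            return pw
    return None  # the preimage lies in another worker's partition
```

In a real grid or cloud deployment each `crack` call would run on a separate node; only the worker whose partition contains the password reports a hit.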
115

Diseño, Especificación, Validación y Aplicación de una Arquitectura modular de gestión de Redes Inalámbricas de Sensores / Design, specification, validation and application of a modular management architecture for wireless sensor networks

Pileggi, Salvatore Flavio 15 April 2011 (has links)
In recent years, as a consequence of growing commercial interest, wireless sensor networks have been the subject of intense research activity that has produced significant advances both in the underlying technology and in engineering aspects at all levels. Wireless sensor networks are based on the concept of a low-cost autonomous sensor node that provides limited computing and storage capacity, low transmission power, and advanced sensing. They are characterized by extremely small size and engineering oriented toward energy efficiency. Despite the availability of highly advanced solutions, characterized by efficiency and flexibility, massive commercial adoption has repeatedly been put forward as a plausible hypothesis, yet seems slow to materialize definitively. The main causes are related, directly or indirectly, to two factors: high cost and a lack of sufficient reliability/robustness. One consequence of the "ad-hoc" architecture development that currently characterizes wireless sensor networks is a proliferation of local optima, which is the main cause of a worrying absence of standards, both in communication protocols and in the organization and representation of information. New business and exploitation models within latest-generation virtual organizations are also current topics of attention in the international scientific community. This work is situated within recent lines of research aimed at reconciling advanced solutions, characterized by innovative engineering, with their effective application in the real world.
/ Pileggi, S.F. (2011). Diseño, Especificación, Validación y Aplicación de una Arquitectura modular de gestión de Redes Inalámbricas de Sensores [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10740
116

Constructing Covering Arrays using Parallel Computing and Grid Computing

Avila George, Himer 10 September 2012 (has links)
A good strategy to test a software component involves the generation of the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique to accomplish this goal is called combinatorial testing. Combinatorial testing is a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test suites of economical size, which provide coverage of the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to the functional testing of software components. The use of covering arrays allows testing all the interactions, of a given size, among the input parameters using the minimum number of test cases. For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time spent on the software testing process. Because of the importance of constructing (near) optimal covering arrays, much research has been carried out on effective methods for constructing them. There are several reported methods for constructing these combinatorial objects, among them: (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods. Metaheuristic methods, particularly simulated annealing, have provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions.
In this thesis, we propose the development of an improved simulated annealing algorithm. / Avila George, H. (2012). Constructing Covering Arrays using Parallel Computing and Grid Computing [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17027
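The covering-array property described above is easy to state in code. The sketch below (the names and structure are our own, not the thesis's algorithm) checks that every t-way combination of parameter values appears in some row; the classic CA(4; 2, 3, 2) example shows how 4 rows replace the 8 rows of exhaustive testing for three binary parameters while still covering every pairwise interaction.

```python
from itertools import combinations, product

def is_covering_array(rows, strength, levels):
    """Return True if `rows` is a strength-`strength` covering array over
    `levels` values: every `strength`-way combination of columns exhibits
    all levels**strength value combinations in at least one row."""
    k = len(rows[0])
    for cols in combinations(range(k), strength):
        seen = {tuple(r[c] for c in cols) for r in rows}
        if len(seen) < levels ** strength:
            return False   # some t-way interaction is never tested
    return True

# CA(4; 2, 3, 2): 4 tests covering all pairwise interactions of
# 3 binary parameters (exhaustive testing would need 2**3 = 8 tests).
CA_4_2_3_2 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

A construction method (greedy, algebraic, or the thesis's simulated annealing) searches for the smallest row set for which this predicate holds.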
117

Management-Elemente für mehrdimensional heterogene Cluster / Management elements for multidimensionally heterogeneous clusters

Petersen, Karsten 16 June 2003 (has links)
Diploma thesis at the intersection of cluster and grid computing. Integration of distributed resources into an infrastructure. Implementation of a simple checkpointing environment.
118

Multi-Agent Based Simulations in the Grid Environment

Mengistu, Dawit January 2007 (has links)
The computational Grid has become an important infrastructure as an execution environment for scientific applications that require large amounts of computing resources. Applications which would otherwise be unmanageable or take prohibitively long execution times under previous computing paradigms can now be executed efficiently on the Grid within a reasonable time. Multi-agent based simulation (MABS) is a methodology used to study and understand the dynamics of real world phenomena in domains involving interaction and/or cooperative problem solving, where the participants are characterized as entities having autonomous and social behaviour. For certain domains the size of the simulation is extremely large and intractable without adequate computing resources such as the Grid. Although the Grid has brought immense opportunities to resource-demanding applications such as MABS, it has also brought a number of challenges related to performance. Performance problems may have their origins on the side of the computing infrastructure, the application itself, or both. This thesis aims at improving the performance of MABS applications by overcoming problems inherent to their behaviour. It also studies the extent to which MABS technologies have been exploited in the field of simulation and finds ways to adapt existing technologies for the Grid. It investigates performance monitoring and prediction systems in the Grid environment and their implementation for MABS applications, with the purpose of identifying application-related performance problems and their solutions. Our research shows that large-scale MABS applications have not been implemented, despite the fact that many problem domains cannot be studied properly with only partial simulation. We assume that this is due to the lack of appropriate tools, such as MABS platforms for the Grid.
Another important finding of this work is the improvement of application performance through the use of MABS-specific middleware.
119

Metadata Management in Multi-Grids and Multi-Clouds

Espling, Daniel January 2011 (has links)
Grid computing and cloud computing are two related paradigms used to access and use vast amounts of computational resources. The resources are often owned and managed by a third party, relieving the users from the costs and burdens of acquiring and managing a considerably large infrastructure themselves. Commonly, the resources are either contributed by different stakeholders participating in shared projects (grids), or owned and managed by a single entity and made available to its users with charging based on actual resource consumption (clouds). Individual grid or cloud sites can form collaborations with other sites, giving each site access to more resources that can be used to execute tasks submitted by users. There are several different models of collaborations between sites, each suitable for different scenarios and each posing additional requirements on the underlying technologies. Metadata concerning the status and resource consumption of tasks are created during the execution of the task on the infrastructure. This metadata is used as the primary input in many core management processes, e.g., as a base for accounting and billing, as input when prioritizing and placing incoming tasks, and as a base for managing the amount of resources allocated to different tasks. Focusing on management and utilization of metadata, this thesis contributes to a better understanding of the requirements and challenges imposed by different collaboration models in both grids and clouds. The underlying design criteria and resulting architectures of several software systems are presented in detail. Each system addresses different challenges imposed by cross-site grid and cloud architectures: The LUTSfed approach provides a lean and optional mechanism for filtering and management of usage data between grid or cloud sites.
An accounting and billing system natively designed to support cross-site clouds demonstrates usage data management despite unknown placement and dynamic task resource allocation. The FSGrid system enables fairshare job prioritization across different grid sites, mitigating the problems of heterogeneous scheduling software and local management policies. The results and experiences from these systems are both theoretical and practical, as full-scale implementations of each system have been developed and analyzed as part of this work. Early theoretical work on structure-based service management forms a foundation for future work on structure-aware service placement in cross-site clouds.
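Fairshare prioritization of the kind FSGrid provides can be sketched as a ratio of each user's target share to their half-life-decayed historical usage, computed from exactly the per-task usage metadata discussed above. The function below is a minimal illustration with assumed names and a simplified decay model, not the FSGrid algorithm itself.

```python
def fairshare_priorities(target_shares, usage_history, half_life=7.0):
    """Compute a priority per user: the ratio of the user's allocated
    share to their fraction of half-life-decayed historical usage.
    usage_history maps user -> list of (age_in_days, cpu_hours) records,
    e.g. as aggregated from per-task accounting metadata."""
    decayed = {
        user: sum(hours * 0.5 ** (age / half_life) for age, hours in records)
        for user, records in usage_history.items()
    }
    total = sum(decayed.values()) or 1.0
    return {
        # Users who consumed less than their share get priority > share,
        # heavy consumers get pushed down; epsilon avoids division by zero.
        user: target_shares[user] / max(decayed[user] / total, 1e-9)
        for user in target_shares
    }
```

With equal target shares, a user whose recent jobs consumed ten times more CPU ends up with a tenth of the priority, which is the intended self-balancing behaviour of fairshare scheduling.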
120

Peer to Peer Grid for Software Development : Improving community based software development using community based grids

Sarrafi, Ali January 2011 (has links)
Today, the number of software projects with large numbers of developers distributed all over the world is increasing rapidly. This rapid growth in distributed software development increases the need for new tools and environments to facilitate the developers' communication, collaboration and cooperation. Distributed revision control systems, such as Git or Bazaar, are examples of tools that have evolved to improve the quality of development in such projects. In addition, building and testing large-scale cross-platform software is especially hard for individual developers in an open source development community, due to their lack of powerful and diverse computing resources. Computational grids are networks of computing resources that are geographically distributed and can be used to run complex tasks very efficiently by exploiting parallelism. However, these systems are often configured for cloud computing and use a centralized structure, which reduces their scalability and fault tolerance. Pure peer-to-peer (P2P) systems, on the other hand, are networks without a central structure. P2P systems are highly scalable, flexible, dynamically adaptable and fault tolerant. Introducing P2P and grid computing together to the software development process can significantly increase individual developers' access to computing resources all over the world. In this master thesis we evaluated the possibilities of integrating these technologies with software development and the associated test cycle in order to achieve better software quality in community-driven software development. The main focus of this project was on the mechanisms of data transfer, management, and dependency among peers, as well as investigating the performance/overhead ratio of these technologies.
For our evaluation we used the MoSync Software Development Kit (SDK), a cross-platform mobile software solution, as a case study, and developed and evaluated a prototype for the distributed development of this system. Our measurements show that using our prototype, the time required for building the MoSync SDK is approximately six times shorter than using a single process. We have also proposed a method for near-optimum task distribution over peer-to-peer grids that are used for build and test.
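The near-optimum task distribution over peers mentioned above can be approximated with a simple greedy heuristic: sort build/test tasks by estimated cost and assign each to the peer that would finish it earliest, given the peer's speed and the load already assigned to it. This is a generic longest-task-first sketch under assumed inputs, not the method proposed in the thesis.

```python
def distribute_tasks(task_costs, peer_speeds):
    """Greedily assign tasks (largest estimated cost first) to peers.
    task_costs[i] is the work in task i; peer_speeds[p] is peer p's
    relative speed. Returns (assignments, makespan), where
    assignments[p] lists the task ids given to peer p."""
    loads = [0.0] * len(peer_speeds)          # current finish time per peer
    assignments = [[] for _ in peer_speeds]
    for tid, cost in sorted(enumerate(task_costs), key=lambda x: -x[1]):
        # Pick the peer that would complete THIS task earliest,
        # accounting for its speed and already-assigned work.
        p = min(range(len(peer_speeds)),
                key=lambda i: loads[i] + cost / peer_speeds[i])
        loads[p] += cost / peer_speeds[p]
        assignments[p].append(tid)
    return assignments, max(loads)
```

For the classic example of tasks costing 4, 3, 2, 1 on two equal peers, the heuristic balances the peers at a makespan of 5 instead of the serial 10, illustrating the roughly parallel speedup the prototype measures.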
