81

A Desktop Grid Computing Approach for Scientific Computing and Visualization

Constantinescu-Fuløp, Zoran January 2008 (has links)
Scientific Computing is the collection of tools, techniques, and theories required to solve mathematical models of problems from science and engineering on a computer; its main goal is to gain insight into such problems. It is generally difficult to understand or communicate information from the complex or large datasets generated by Scientific Computing methods and techniques (computational simulations, complex experiments, observational instruments, etc.). Scientific Visualization is therefore needed to provide the techniques, algorithms, and software tools for extracting and appropriately displaying the important information in numerical data. Complex computational and visualization algorithms usually require large amounts of computing power. A single desktop computer is insufficient for running such algorithms, and traditionally large parallel supercomputers or dedicated clusters were used for this job; however, very high initial investments and maintenance costs limit the availability of such systems. A more convenient and increasingly popular solution is based on non-dedicated desktop PCs in a Desktop Grid Computing environment, which harnesses the idle CPU cycles, storage space, and other resources of networked computers to work together on a computationally intensive application. The increasing power and communication bandwidth of desktop computers makes this solution feasible. In a desktop grid system, the execution of an application is orchestrated by a central scheduler node, which distributes tasks amongst the worker nodes and awaits their results; an application finishes only when all tasks have been completed. The attractiveness of exploiting desktop grids is further reinforced by the fact that costs are highly distributed: each volunteer supports her own resources (hardware, power costs, and internet connection), while the benefiting entity provides the management infrastructure, namely network bandwidth, servers, and management services, receiving in exchange massive and otherwise unaffordable computing power. The usefulness of desktop grid computing is not limited to major high-throughput public computing projects: many institutions, from academia to enterprises, hold vast numbers of desktop machines and could benefit from exploiting their idle cycles. The central idea of the work presented in this thesis has been to provide a desktop grid computing framework and to prove its viability by testing it in Scientific Computing and Visualization experiments. We present QADPZ, an open source system for desktop grid computing developed to meet the needs outlined above. QADPZ enables users on a local network or the Internet to share their resources. It is a multi-platform, heterogeneous system in which different computing resources from inside an organization can be used; it can also be used for volunteer computing, where the communication infrastructure is the Internet. QADPZ natively supports Linux, Windows, MacOS, and Unix variants. The reason for natively supporting multiple operating systems, rather than only one (Unix or Windows, as other systems do), is that in practice such a limitation greatly restricts the usability of desktop grid computing.
QADPZ provides a flexible object-oriented software framework that makes it easy for programmers to write various applications, and for researchers to address issues such as adaptive parallelism, fault-tolerance, and scalability. The framework also supports the execution of legacy applications, which for various reasons cannot be rewritten, making it suitable for other domains such as business. It supports both low-level programming languages such as C/C++ and high-level language applications (e.g., Lisp, Python, and Java), and provides the necessary mechanisms to use such applications in a computation. Consequently, users with various backgrounds can benefit from using QADPZ. The flexible object-oriented structure and modularity allow easy improvements and further extensions to other programming languages. We have developed a general-purpose runtime and an API to support new kinds of high performance computing applications and thereby benefit from the advantages offered by desktop grid computing; this API directly supports the C/C++ programming language. We have shown how distributed computing extends beyond the master-worker paradigm (typical for such systems) and provided QADPZ with an extended API that additionally supports lightweight tasks and parallel computing (using the message passing paradigm, MPI). This extends the range of supported applications to existing MPI-based ones, e.g. parallel numerical solvers used in computational science or parallel visualization algorithms. Another restriction of existing systems, especially middleware-based ones, is that each resource provider needs to install a runtime module with administrator privileges, which raises issues of data integrity and accessibility on providers' computers. QADPZ tries to overcome this by allowing the middleware module to run as a non-privileged user, even with restricted access to the local system. QADPZ also provides low-level optimizations, such as on-the-fly compression and encryption of communication. The user can choose from different algorithms depending on the application, both reducing the communication overhead imposed by large data transfers and preserving the privacy of the data. The system goes further by providing an experimental adaptive compression algorithm, which can transparently choose among algorithms to improve the application. QADPZ supports two different protocols (UDP and TCP/IP) in order to improve the efficiency of communication. Free source code allows flexible installation and modification based on the particular needs of research projects and institutions. In addition to being a very powerful tool for computationally intensive research, its open-source nature makes QADPZ a flexible educational platform for numerous small student projects in areas such as operating systems, distributed systems, mobile agents, and parallel algorithms. Open source software is also a natural choice for modern research because it effectively encourages integration, cooperation, and the emergence of new ideas. This thesis also proposes an improved conceptual model (based on the master-worker paradigm) that makes contributions in several directions: pull vs. push work-units, pipelining of work-units, sending more work-units at a time, an adaptive number of workers, an adaptive time-out interval for work-units, and multithreading.
We have also demonstrated, through specific experiments, that the use of desktop grids need not be limited to master-worker applications but can support more fine-grained parallel Scientific Computing and Visualization applications. This thesis makes supplementary contributions: a hierarchical taxonomy of the main existing desktop grids, and an adaptive compression algorithm for remote visualization. QADPZ has also pioneered an autonomic computing approach for desktop grids and presents specific self-management features: self-knowledge, self-configuration, self-optimization, and self-healing. It is worth mentioning that QADPZ currently has over a thousand users who have downloaded it (since July 2001, when it was uploaded to sourceforge.net), and many of them use it for their daily tasks (see the appendix). Many of the results have been published or are in the course of publication, as can be seen from the references.
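The master-worker model described above — a central scheduler handing out work-units, with workers pulling work and returning results — can be sketched roughly as follows. This is an illustrative Python sketch, not QADPZ's actual C/C++ API; the Master/worker names are assumptions, and adaptive time-outs and pipelining are omitted.

```python
# Minimal sketch of a pull-based master-worker scheduler, in the spirit of the
# model described above.  The class and function names are hypothetical and do
# not correspond to the actual QADPZ API.
import queue
import threading

class Master:
    def __init__(self, work_units):
        self.pending = queue.Queue()
        for wu in work_units:
            self.pending.put(wu)
        self.results = {}

    def request_work(self):
        """Workers *pull* work-units instead of the master pushing them."""
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None            # no more work: the application is finished

    def submit_result(self, wu_id, value):
        self.results[wu_id] = value

def worker(master, compute):
    while True:
        wu = master.request_work()
        if wu is None:
            break
        wu_id, payload = wu
        master.submit_result(wu_id, compute(payload))

if __name__ == "__main__":
    master = Master([(i, i) for i in range(100)])
    threads = [threading.Thread(target=worker, args=(master, lambda x: x * x))
               for _ in range(4)]   # four "worker nodes"
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(master.results), "work-units completed")
```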
82

Grid-Enabled Automatic Web Page Classification

Metikurke, Seema Sreenivasamurthy 12 June 2006 (has links)
Much research has been conducted on the retrieval and classification of web-based information. A major challenge is performance, especially for a classification algorithm returning results over the large data sets typical of the Web. This thesis describes a grid-enabled approach to automatic web page classification. The basic approach, which uses a vector space model (VSM), is described first; an enhancement through the use of a genetic algorithm (GA) is then described. The enhanced approach can efficiently process candidate web pages from a number of web sites and classify them. A prototype is implemented and empirical studies are conducted. The contributions of this thesis are: 1) the application of grid computing to improve the performance of both the VSM-based and the GA-enhanced VSM-based web page classification; 2) the improvement of the VSM classification algorithm by applying a GA that uniquely discovers a set of training web pages while also generating a near-optimal set of parameter values for the VSM.
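A minimal sketch of the VSM step — classifying a page by the cosine similarity of its term-frequency vector against per-category training vectors — is shown below. The categories and training text are invented for illustration, and the GA-based selection of training pages is not shown.

```python
# Toy vector-space-model (VSM) classification by cosine similarity.
# Training pages and categories are made up; real systems would use larger
# corpora, stop-word removal, and tf-idf weighting.
import math
from collections import Counter

def tf_vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

training = {
    "sports":  tf_vector("football score match team league goal"),
    "finance": tf_vector("stock market share price investor bank"),
}

def classify(page_text):
    vec = tf_vector(page_text)
    return max(training, key=lambda cat: cosine(vec, training[cat]))

print(classify("the team won the match with a late goal"))   # -> sports
```

In a grid-enabled setting, each worker would classify its own batch of candidate pages with code like this and return only the labels.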
83

Investigation of service selection algorithms for grid services

Guha, Tapashree 15 September 2009
Grid computing has emerged as a global platform to support organizations in the coordinated sharing of distributed data, applications, and processes. Additionally, Grid computing has leveraged web services to define standard interfaces for Grid services, adopting the service-oriented view. Consequently, there have been significant efforts to enable applications capable of tackling computationally intensive problems as services on the Grid. In order to ensure that the available services are assigned efficiently to the high volume of incoming requests, it is important to have a robust service selection algorithm. The selection algorithm should not only increase access to the distributed services, promoting operational flexibility and collaboration, but should also allow service providers to scale efficiently to meet a variety of demands while adhering to current Quality of Service (QoS) standards. In this research, two service selection algorithms are proposed: the Particle Swarm Intelligence based Service Selection Algorithm (PSI Selection Algorithm), based on the Multiple Objective Particle Swarm Optimization algorithm with the Crowding Distance technique, and the Constraint Satisfaction based Selection (CSS) algorithm. The proposed algorithms are designed to achieve the following goals: handling a large number of incoming requests simultaneously; achieving high match scores in the competitive matching of similar types of incoming requests; assigning each service efficiently to the incoming requests; giving service requesters the flexibility to provide multiple service selection criteria based on a QoS metric; and selecting the appropriate services for the incoming requests within a reasonable time. The two algorithms are then verified against a standard assignment problem algorithm, the Munkres algorithm. The feasibility and accuracy of the proposed algorithms are tested using various evaluation methods based on real-world scenarios, which check how closely the requests are matched to the available services given the QoS parameters provided by the requesters.
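As a sketch of the verification step mentioned above, the Munkres (Hungarian) assignment algorithm is available in SciPy as linear_sum_assignment; the QoS match-score matrix below is invented for illustration and does not come from the thesis.

```python
# Optimal request-to-service assignment via the Munkres algorithm, as used to
# verify the proposed selection algorithms.  The match-score matrix is made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

# scores[i][j]: how well service j satisfies the QoS criteria of request i
scores = np.array([
    [0.9, 0.4, 0.7],
    [0.3, 0.8, 0.5],
    [0.6, 0.2, 0.9],
])

# linear_sum_assignment minimizes cost, so negate the scores to maximize them
rows, cols = linear_sum_assignment(-scores)
for req, srv in zip(rows, cols):
    print(f"request {req} -> service {srv} (score {scores[req, srv]:.1f})")
print("total match score:", scores[rows, cols].sum())
```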
84

Enabling Technologies for Management of Distributed Computing Infrastructures

Espling, Daniel January 2013 (has links)
Computing infrastructures offer remote access to computing power that can be employed, e.g., to solve complex mathematical problems or to host computational services that need to be online and accessible at all times. From the perspective of the infrastructure provider, large amounts of distributed and often heterogeneous computer resources need to be united into a coherent platform that is then made accessible to and usable by potential users. Grid computing and cloud computing are two paradigms that can be used to form such unified computational infrastructures. Resources from several independent infrastructure providers can be joined to form large-scale decentralized infrastructures. The primary advantage of doing this is that it increases the scale of the available resources, making it possible to address more complex problems or to run a greater number of services on the infrastructures. In addition, there are advantages in terms of factors such as fault-tolerance and geographical dispersion. Such multi-domain infrastructures require sophisticated management processes to mitigate the complications of executing computations and services across resources from different administrative domains. This thesis contributes to the development of management processes for distributed infrastructures that are designed to support multi-domain environments. It describes investigations into how fundamental management processes such as scheduling and accounting are affected by the barriers imposed by multi-domain deployments, which include technical heterogeneity, decentralized and (domain-wise) self-centric decision making, and a lack of information on the state and availability of remote resources. Four enabling technologies or approaches are explored and developed within this work: (I) The use of explicit definitions of cloud service structure as inputs for placement and management processes to ensure that the resulting placements respect the internal relationships between different service components and any relevant constraints. (II) Technology for the runtime adaptation of Virtual Machines to enable the automatic adaptation of cloud service contexts in response to changes in their environment caused by, e.g., service migration across domains. (III) Systems for managing meta-data relating to resource usage in multi-domain grid computing and cloud computing infrastructures. (IV) A global fairshare prioritization mechanism that enables computational jobs to be consistently prioritized across a federation of several decentralized grid installations. Each of these technologies will facilitate the emergence of decentralized computational infrastructures capable of utilizing resources from diverse infrastructure providers in an automatic and seamless manner. / Note: the author changed surname from Henriksson to Espling in 2011.
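A minimal sketch of the idea behind contribution (IV), global fairshare prioritization, under assumed inputs: usage figures aggregated across federated sites are compared with target shares, and the most under-served project receives the highest priority. The numbers and the priority formula are illustrative assumptions, not the mechanism developed in the thesis.

```python
# Illustrative fairshare prioritization across a federation: projects whose
# aggregated usage is furthest below their target share get the highest
# priority.  Shares, usage figures, and the formula are assumptions.
target_share = {"projA": 0.50, "projB": 0.30, "projC": 0.20}

# CPU-hours consumed at each grid installation in the federation
usage_per_site = {
    "site1": {"projA": 900, "projB": 100, "projC": 100},
    "site2": {"projA": 300, "projB": 200, "projC": 100},
}

total_usage = {p: sum(site.get(p, 0) for site in usage_per_site.values())
               for p in target_share}
grand_total = sum(total_usage.values())

def priority(project):
    actual = total_usage[project] / grand_total
    return target_share[project] - actual   # positive: under-served, boost it

for p in sorted(target_share, key=priority, reverse=True):
    print(p, round(priority(p), 3))
```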
86

A Grid-based Seismic Hazard Analysis Application

Kocair, Celebi 01 September 2010 (has links) (PDF)
The results of seismic hazard analysis (SHA) play a crucial role in assessing seismic risks and mitigating seismic hazards. SHA calculations generally involve magnitude and distance distribution models, and ground motion prediction models as components. Many alternatives have been proposed for these component models. SHA calculations may be demanding in terms of processing power depending on the models and analysis parameters involved, and especially the size of the site for which the analysis is to be performed. In this thesis, we develop a grid-based SHA application which provides the necessary computational power and enables the investigation of the effects of applying different models. Our application not only includes various already implemented component models but also allows integration of newly developed ones.
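As a rough illustration of how such component models combine, the toy probabilistic hazard calculation below folds a Gutenberg-Richter magnitude distribution and a made-up ground-motion prediction model into an annual rate of exceedance for a single site. All functional forms and coefficients are placeholders, not the component models implemented in the application.

```python
# Toy probabilistic seismic hazard calculation: combine a magnitude
# distribution, a source-to-site distance, and a ground-motion prediction
# model into a rate of exceedance.  Every coefficient here is invented.
import math

def gutenberg_richter_pdf(m, mmin=5.0, mmax=8.0, b=1.0):
    beta = b * math.log(10)
    norm = 1.0 - math.exp(-beta * (mmax - mmin))
    return beta * math.exp(-beta * (m - mmin)) / norm

def prob_exceed(im, m, r_km):
    """Toy ground-motion prediction: lognormal IM with a made-up median model."""
    ln_median = -1.0 + 0.9 * m - 1.3 * math.log(r_km + 10.0)
    sigma = 0.6
    z = (math.log(im) - ln_median) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))   # P(IM > im | m, r)

def annual_exceedance_rate(im, rate_mmin=0.05, r_km=30.0, n=200):
    """Integrate P(exceed) over the magnitude distribution for one source."""
    mmin, mmax = 5.0, 8.0
    dm = (mmax - mmin) / n
    total = 0.0
    for i in range(n):
        m = mmin + (i + 0.5) * dm
        total += prob_exceed(im, m, r_km) * gutenberg_richter_pdf(m) * dm
    return rate_mmin * total

print(annual_exceedance_rate(0.2))   # rate of exceeding an IM level of 0.2
```

A grid-based application would evaluate such an integral independently for every site in a region and for every combination of component models, which is what makes the problem naturally parallel.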
87

Development Of A Grid-aware Master Worker Framework For Artificial Evolution

Ketenci, Ahmet 01 December 2010 (has links) (PDF)
The Genetic Algorithm (GA) has become a very popular tool for various kinds of problems, including optimization problems with wide search spaces, where grid-search techniques are usually infeasible or ineffective at finding a solution that is good enough. The most computationally intensive component of a GA is the calculation of the goodness (fitness) of candidate solutions. However, since the fitness calculation of each individual does not depend on the others, this process can be parallelized easily. The easiest way to reach large amounts of computational power is to use a grid: grids are composed of multiple clusters and can therefore offer far more resources than a single cluster. On the other hand, the grid may not be the easiest environment in which to develop parallel programs, because of the lack of tools or libraries for communication among processes. In this work, we introduce a new framework, GridAE, for GA applications. GridAE uses the master-worker model for parallelization and offers a GA library to users. It also abstracts the message-passing process away from users. Moreover, it has both a command-line interface and a web interface for job management. These properties make the framework usable even for developers with limited parallel programming or grid computing experience. The performance of GridAE is tested on a shape optimization problem, and the results show that the framework is most suitable for problems with large populations.
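The key observation above — fitness evaluations are independent and can be computed in parallel — can be sketched as follows, with a local process pool standing in for grid workers. The GA operators and the objective are toy examples; GridAE's actual API is not shown.

```python
# Master-worker style GA: fitness evaluations are farmed out in parallel.
# multiprocessing.Pool stands in here for remote grid workers.
import random
from multiprocessing import Pool

def fitness(individual):
    # Toy objective: maximize the number of ones in a bit string
    return sum(individual)

def evolve(pop_size=40, genome_len=32, generations=20):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    with Pool() as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)        # parallel fitness evaluation
            ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:pop_size // 2]
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]          # one-point crossover
                if random.random() < 0.05:         # mutation
                    i = random.randrange(genome_len)
                    child[i] ^= 1
                children.append(child)
            pop = children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```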
88

Workshop Mensch-Computer-Vernetzung

Hübner, Uwe 15 October 2003 (has links)
Workshop Mensch-Computer-Vernetzung (Human-Computer Networking Workshop), held 14-17 April 2003 in Löbsal (near Meißen)
89

Fluid mechanics and the grid (Ρευστομηχανική και grid)

Κωνσταντινίδης, Νικόλαος 30 April 2014 (has links)
The need to solve large problems, together with the development of Internet technology, has created a continuous need for more and more resources. This need led to the creation of structures of collaborating computing systems, with the ultimate aim of solving problems that require large computing power or the storage of large amounts of data. The existence of such structures, and of central processing units with more than one processor, gave rise to protocols for developing applications that execute and solve a problem on more than one processor, in order to reduce execution time. One example of such a protocol is message passing (MPI). The purpose of this diploma thesis is to modify an existing application that requires significant computing power so that it can exploit systems such as those described above. Through this process, the advantages and disadvantages of parallel programming are analyzed.
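A minimal message-passing sketch of the kind of parallelization described, assuming the mpi4py Python bindings: the index range of a computation (a trivial sum standing in for a fluid-mechanics kernel) is split across MPI ranks and the partial results are reduced on rank 0. This is only an illustration, not the thesis's code.

```python
# Split work across MPI ranks and reduce the partial results on rank 0.
# Requires the mpi4py package and an MPI runtime.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 1_000_000
# Each rank processes a contiguous slice of the index range
start = rank * N // size
end = (rank + 1) * N // size
partial = sum(i * i for i in range(start, end))

total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)

# Run with, e.g.:  mpirun -np 4 python sketch.py
```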
90

Resource management in grid networks using the gLite middleware (Διαχείριση πόρων σε δίκτυα πλέγματος, χρησιμοποιώντας το ενδιάμεσο λογισμικό gLite)

Κρέτσης, Αριστοτέλης 27 April 2009 (has links)
In recent years, the rapid growth of computing power, storage media, and telecommunications has created fertile ground for the development of complex, demanding applications, both in scientific research and in commercial production. As a result, a transition is taking place from the model of isolated, discrete resources to a model of collaborating distributed resources, realized by Grid computing. Grids consist of geographically distributed and heterogeneous computational and storage resources that may belong to different administrative domains but can be shared among users by establishing a global resource management architecture. An important issue affecting the overall performance of grid networks is the scheduling of user-submitted jobs onto the available resources: the grid environment is highly dynamic, with resource availability and load varying rapidly over time, and application tasks have very different characteristics and requirements, so scheduling determines both the efficiency of resource use and the QoS provided to users. The goal of this thesis was to study scheduling in grid networks not through simulation but using the gLite middleware. The main object of study was the Workload Management System (WMS) service, in which gLite's scheduling algorithms are implemented. We analyzed the operation of the two scheduling algorithms provided by the middleware and the architecture of the WMS service, one of the most important services for the operation of the entire grid. We then implemented and integrated into the WMS a new fair algorithm for assigning jobs to the available grid resources, describing the problems encountered and solved in going from theory and simulations to an actual implementation, as well as the steps needed to develop and test a new scheduling algorithm in gLite. Finally, we set up a small-scale grid testbed for the experimental evaluation of the new algorithm and its comparison with the two basic gLite algorithms; this is the first time the gLite scheduling algorithms have been put under test and compared with a new algorithm under the same conditions. The results show that our algorithm provides better utilization of the grid's resources while reducing the average job execution time, shed light on some of the problems of the existing gLite scheduling algorithms, and make clear the need for the development of new ones.
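For illustration only, the sketch below shows the kind of resource ranking a workload manager performs when assigning jobs: each job goes to the computing element with the lowest estimated queue pressure, and the chosen element's state is updated before the next job is placed. This is a simplification and is neither the gLite WMS implementation nor the fair algorithm developed in the thesis; the host names and load figures are invented.

```python
# Toy job-to-resource ranking in the spirit of a WMS match-making step.
# Computing elements and their load figures are hypothetical.
computing_elements = {
    "ce01.example.org": {"free_cpus": 8,  "queued_jobs": 4},
    "ce02.example.org": {"free_cpus": 2,  "queued_jobs": 0},
    "ce03.example.org": {"free_cpus": 16, "queued_jobs": 40},
}

def rank(ce):
    info = computing_elements[ce]
    # Lower is better: estimated queue pressure per available CPU
    return info["queued_jobs"] / max(info["free_cpus"], 1)

def schedule(jobs):
    for job in jobs:
        best = min(computing_elements, key=rank)
        computing_elements[best]["queued_jobs"] += 1   # update state for next job
        print(f"{job} -> {best}")

schedule([f"job{i}" for i in range(5)])
```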
