About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Community driven data grids

Scholl, Tobias. Date unknown.
Dissertation, Technische Universität München, 2010.
12

Einsatz von Risikomanagement bei der Steuerung von Grid-Systemen: eine Analyse von Versicherungen anhand einer simulierten Grid-Ökonomie [Use of risk management in the control of grid systems: an analysis of insurance schemes in a simulated grid economy]

Streitberger, Werner. January 2009.
Dissertation, Universität Bayreuth, 2009.
13

LHCb data management on the computing grid

Smith, Andrew Cameron. January 2009.
The LHCb detector is one of the four experiments being built to harness the proton-proton collisions provided by the Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN). The data rate expected when the LHC experiments are fully operational eclipses that of any previous scientific experiment, and has motivated the adoption of a grid computing paradigm to store and process the data. Managing petabytes of data in a distributed environment presents a rich set of challenges related to scalability, reliability and performance. This thesis presents the data management requirements for executing the workload of the LHCb collaboration. We present the systems designed to support all aspects of grid data management for LHCb, from data transfer, to data integrity, to efficient data access. The distributed computing environment is inherently unstable, and much focus has been placed on providing systems that are robust and resilient to observed failures.
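
The resilience theme of this abstract maps naturally onto a retry-with-failover transfer pattern: try each replica of a file in turn, backing off on transient failures. The Python sketch below is only an illustration of that pattern under simulated failures; the function names, URLs and failure rate are our assumptions, not LHCb's actual data-management code (a real deployment would use grid transfer middleware).

```python
import random
import time

def copy_file(source_url: str, dest_path: str) -> None:
    """Stand-in for a real grid transfer primitive; fails ~30% of the
    time to mimic a flaky storage element."""
    if random.random() < 0.3:
        raise IOError(f"transfer from {source_url} failed")
    # a real implementation would stream the file to dest_path here

def fetch_with_failover(replicas, dest_path,
                        retries_per_replica=2, backoff_s=0.1):
    """Try each replica in turn, retrying transient failures with
    exponential backoff; return the replica that finally served us."""
    last_error = None
    for url in replicas:
        for attempt in range(retries_per_replica):
            try:
                copy_file(url, dest_path)
                return url
            except IOError as err:
                last_error = err
                time.sleep(backoff_s * 2 ** attempt)
    raise IOError(f"all replicas exhausted: {last_error}")

replicas = ["se1.example.org/lhcb/f.root", "se2.example.org/lhcb/f.root"]
print("fetched from", fetch_with_failover(replicas, "/tmp/f.root"))
```
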
14

Traitement et analyse de grands ensembles d'images médicales [Processing and analysis of large sets of medical images]

Montagnat, Johan. 20 December 2006.
Abstract not available.
15

Towards Grid-Wide Modeling and Simulation

Xie, Yong; Teo, Yong Meng; Cai, W.; Turner, S. J. Date unknown.
Modeling and simulation permeate all areas of business, science and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and collaborative model development is needed, as multiple parties may be involved in the development process. The Grid provides a platform for coordinated resource sharing and for application development and execution. In this paper, we survey existing technologies in modeling and simulation, focusing on the interoperability and composability of simulation components for both simulation development and execution. We also present our recent work on an HLA-based simulation framework on the Grid, and discuss the issues involved in achieving composability. (Singapore-MIT Alliance, SMA)
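
To make the composability question concrete, here is a minimal sketch in which each simulation component declares the event types it consumes and produces, and a compatibility check loosely mirrors HLA-style publish/subscribe declarations. The `SimComponent` interface and the example components are invented for illustration and are not the authors' framework.

```python
from dataclasses import dataclass, field

@dataclass
class SimComponent:
    """A simulation component declaring its event interface."""
    name: str
    consumes: set = field(default_factory=set)  # event types it subscribes to
    produces: set = field(default_factory=set)  # event types it publishes

def composable(upstream: SimComponent, downstream: SimComponent) -> bool:
    """Two components compose if the downstream consumes at least one
    event type the upstream produces, loosely mirroring matched HLA
    publish/subscribe declarations checked before federates join."""
    return bool(upstream.produces & downstream.consumes)

traffic = SimComponent("traffic", produces={"vehicle_arrival"})
junction = SimComponent("junction", consumes={"vehicle_arrival"},
                        produces={"queue_length"})
print(composable(traffic, junction))  # True: the two can be wired together
```
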
16

Grid and High-Performance Computing for Applied Bioinformatics

Andrade, Jorge. January 2007.
The beginning of the twenty-first century has been characterized by an explosion of biological information. This avalanche of data grows daily, arising from advances in molecular biology, genomics and proteomics. The challenge for today's biologists lies in decoding this huge and complex body of data in order to achieve a better understanding of how our genes shape who we are, how our genome evolved, and how we function. Without annotation and data mining, the information provided by, for example, high-throughput genomic sequencing projects is of limited use. Bioinformatics is the application of computer science and technology to the management and analysis of biological data, in an effort to address biological questions. The work presented in this thesis focuses on the use of grid and high-performance computing for solving computationally expensive bioinformatics tasks where, owing to the very large amount of available data and the complexity of the tasks, new solutions are required for efficient data analysis and interpretation. Three major research topics are addressed: first, the use of grids to distribute the execution of sequence-based proteomic analyses, applied to optimal epitope selection and to a proteome-wide effort to map the linear epitopes in the human proteome; second, the application of grid technology to genetic association studies, which enabled the analysis of thousands of simulated genotypes; and finally, the development and application of an economics-based model for grid-job scheduling and resource administration. The grid-based technology developed in this investigation resulted in the successful tagging and linking of chromosome regions in Alzheimer's disease, the proteome-wide mapping of linear epitopes, and a market-based resource-allocation scheme for scientific applications on grids.
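
The grid-distributed proteome analyses summarised above are embarrassingly parallel: a proteome can be cut into independent work units and farmed out to worker nodes. The sketch below shows one plausible chunking scheme; the 15-residue window width, the round-robin partition and the toy sequences are assumptions made for illustration, not the thesis's actual pipeline.

```python
def sliding_windows(sequence: str, width: int = 15, step: int = 1):
    """Yield candidate linear-epitope windows from one protein sequence."""
    for i in range(0, len(sequence) - width + 1, step):
        yield sequence[i:i + width]

def make_grid_jobs(proteome: dict, n_jobs: int):
    """Partition (protein_id, window) work units into n_jobs roughly
    equal lists, each ready to run on an independent grid worker."""
    units = [(pid, win) for pid, seq in proteome.items()
             for win in sliding_windows(seq)]
    return [units[i::n_jobs] for i in range(n_jobs)]  # round-robin split

proteome = {"P1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
            "P2": "MSLLTEVETPIRNEWGCRCNDSSDPLVVAASII"}
for j, job in enumerate(make_grid_jobs(proteome, 3)):
    print(f"grid job {j}: {len(job)} windows to scan")
```
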
17

Beyond music sharing: an evaluation of peer-to-peer data dissemination techniques in large scientific collaborations

Al Kiswany, Samer. Date unknown.
The avalanche of data from scientific instruments, and the ensuing interest from geographically distributed users in analyzing and interpreting it, accentuates the need for efficient data dissemination. An optimal data distribution scheme must strike a delicate balance between the conflicting requirements of minimizing transfer times, minimizing the impact on the network, and distributing load uniformly among participants. We identify several data distribution techniques, some successfully employed by today's peer-to-peer networks: staging, data partitioning, orthogonal bandwidth exploitation, and combinations of the above. We use simulations to explore the performance of these techniques in contexts similar to those of today's data-centric scientific collaborations, and derive several recommendations for efficient data dissemination. Our experimental results show that peer-to-peer solutions offering load balancing, good fault-tolerance properties and embedded participation incentives lead to unjustified costs in today's scientific data collaborations, which are deployed on over-provisioned network cores. However, as user communities grow and these deployments scale, peer-to-peer data delivery mechanisms will likely outperform other techniques.
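
The data-partitioning techniques evaluated in this work can be illustrated with a toy swarm simulation in which each peer fetches its rarest missing piece every round (the classic BitTorrent heuristic). This simplified model, assuming a single seed and uniform link costs, is ours; it is not the authors' simulator.

```python
def rarest_first_round(peers, n_pieces):
    """One exchange round: every incomplete peer fetches, from any peer
    that has it, its rarest missing piece (the BitTorrent heuristic)."""
    counts = [sum(piece in peer for peer in peers) for piece in range(n_pieces)]
    for me in peers:
        wanted = sorted((p for p in range(n_pieces) if p not in me),
                        key=lambda p: counts[p])        # rarest first
        for p in wanted:
            if any(p in other for other in peers if other is not me):
                me.add(p)                               # one piece per round
                break

n_pieces, n_peers = 8, 5
peers = [set(range(n_pieces))] + [set() for _ in range(n_peers - 1)]  # one seed
rounds = 0
while any(len(p) < n_pieces for p in peers):
    rarest_first_round(peers, n_pieces)
    rounds += 1
print(f"all {n_peers} peers complete after {rounds} rounds")
```
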
19

An e-Science Approach to Genetic Analysis of Quantitative Traits

Jayawardena, Mahen. January 2010.
Many important traits in plants, animals and humans are quantitative, and most such traits are generally believed to be affected by multiple genetic loci. Standard computational tools for mapping quantitative traits (i.e. for finding quantitative trait loci, QTL, in the genome) use linear regression models to relate the observed phenotypes to the genetic composition of individuals in an experimental population. Using these tools to search simultaneously for multiple QTL is computationally demanding, mainly because of the complex optimization landscape of the multidimensional global optimization problems that must be solved. This thesis describes parallel algorithms, implementations and tools for the simultaneous mapping of several QTL. These new computational tools enable genetic analysis exploiting new classes of multidimensional statistical models, potentially leading to new findings in genetics. We first describe how the standard, brute-force algorithm for global optimization in QTL analysis is parallelized and implemented on a grid system. We then present a parallelized version of the more elaborate global optimization algorithm DIRECT and show how it can be efficiently deployed and used on grid systems and other loosely coupled architectures. The parallel DIRECT scheme is further developed to exploit both the coarse-grained parallelism of grid systems and clusters and the fine-grained, tightly coupled parallelism of multi-core nodes. The results show that excellent speedup and performance can be achieved on grid systems and clusters, even with a tightly coupled algorithm such as DIRECT. Finally, we provide two distinctly different front-ends for our code: a grid portal offering a graphical interface suitable for novice users and standard forms of QTL analysis, and a prototype of an R-based, grid-enabled problem-solving environment. Both front-ends can, after some further refinement, be used by geneticists to perform multidimensional genetic analysis of quantitative traits on a regular basis. (eSSENCE)
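
The brute-force multidimensional QTL scan that the thesis parallelises can be mimicked with an exhaustive two-locus search farmed out to worker processes. The sketch below substitutes a toy residual-sum-of-squares objective over a crude effect-size grid for the real linear-regression fit, and uses synthetic genotypes; it illustrates the coarse-grained parallel pattern, not the thesis's code.

```python
from itertools import combinations
from multiprocessing import Pool
import random

random.seed(1)
N_LOCI, N_IND = 40, 200
# synthetic genotypes (0/1/2 allele counts) with true QTL at loci 7 and 23
genotypes = [[random.choice((0, 1, 2)) for _ in range(N_LOCI)]
             for _ in range(N_IND)]
phenotype = [g[7] + 0.8 * g[23] + random.gauss(0, 1) for g in genotypes]

def rss(pair):
    """Toy objective: best residual sum of squares of a two-locus additive
    model over a crude grid of effect sizes (a real code would solve a
    least-squares problem); lower is better."""
    i, j = pair
    best = float("inf")
    for a in (0.0, 0.5, 1.0, 1.5):
        for b in (0.0, 0.5, 1.0, 1.5):
            r = sum((y - a * g[i] - b * g[j]) ** 2
                    for g, y in zip(genotypes, phenotype))
            best = min(best, r)
    return best, pair

if __name__ == "__main__":
    pairs = list(combinations(range(N_LOCI), 2))  # exhaustive 2-D search grid
    with Pool() as pool:                          # workers stand in for
        score, loci = min(pool.map(rss, pairs))   # independent grid jobs
    print("best two-locus model at loci", loci)
```
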
20

A framework for network RTK data processing based on grid computing

Yin, Deming. January 2009.
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at centimetre-level accuracy in the context of Global Navigation Satellite Systems (GNSS). While a network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an RTK system is single-base RTK. In Australia there are several NRTK services operating in different states, and over 1000 single-base RTK systems supporting precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in future RTK development is to make use of the various isolated NRTK and single-base RTK systems, and of multiple GNSS constellations, for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services:

• multiple GNSS constellations and multiple frequencies;
• large-scale, wide-area NRTK services built from a network of networks;
• complex computation algorithms and processes;
• a greater part of the positioning process shifting from the user end to the network centre, which must cope with hundreds of simultaneous user requests (reverse RTK).

These four challenges translate into two major requirements for NRTK data processing: expandable computing power and scalable data sharing and transfer capability. This research explores new approaches to addressing these challenges and requirements using grid computing, in particular for large data-processing burdens and complex computation algorithms. A grid-computing-based NRTK framework is proposed, consisting of three layers: 1) a client layer in the form of a grid portal; 2) a service layer; and 3) an execution layer. A user's request passes through these layers and is scheduled to different grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework was performed in a five-node grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software is adopted to download real-time RTCM data from multiple reference stations over the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new grid-based NRTK framework, while some aspects of the system's performance remain to be improved in future work.
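
The three-layer design sketched in the abstract (client portal, service layer, execution layer) can be summarised as a request flowing from the portal through a scheduler to a grid node. In the sketch below the class names and the least-loaded scheduling policy are our assumptions; in the real framework RTCM streams would arrive via Ntrip rather than as the string labels used here.

```python
from dataclasses import dataclass

@dataclass
class GridNode:                       # execution layer
    name: str
    load: int = 0
    def run(self, job: str) -> str:
        self.load += 1
        return f"{self.name}: processed {job}"

@dataclass
class ServiceLayer:                   # schedules jobs onto grid nodes
    nodes: list
    def submit(self, job: str) -> str:
        node = min(self.nodes, key=lambda n: n.load)  # least-loaded (assumed)
        return node.run(job)

@dataclass
class GridPortal:                     # client layer: accepts user requests
    service: ServiceLayer
    def request_correction(self, rover_id: str, rtcm_stream: str) -> str:
        # Real RTCM data would arrive via Ntrip; the stream is a label here.
        return self.service.submit(f"RTK({rover_id}, {rtcm_stream})")

portal = GridPortal(ServiceLayer([GridNode("node-a"), GridNode("node-b")]))
for rover in ("rover-1", "rover-2", "rover-3"):
    print(portal.request_correction(rover, "stream-VRS"))
```
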
