121

Traitement et analyse de grands ensembles d'images médicales [Processing and analysis of large collections of medical images]

Montagnat, Johan 20 December 2006 (has links) (PDF)
Not available
122

A DHT-Based Grid Resource Indexing and Discovery Scheme

Teo, Yong Meng, March, Verdi, Wang, Xianbing 01 1900 (has links)
This paper presents a DHT-based grid resource indexing and discovery (DGRID) approach. With DGRID, resource-information data is stored within its own administrative domain, and each domain, represented by an index server, is virtualized into several nodes (virtual servers) according to the number of resource types it has. All nodes are then arranged as a structured overlay network, or distributed hash table (DHT). Compared to existing grid resource indexing and discovery schemes, the benefits of DGRID include improving the security of domains, increasing the availability of data, and eliminating stale data. / Singapore-MIT Alliance (SMA)
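To make the indexing idea concrete, the following is a minimal sketch (not the paper's implementation) of how a domain's index server can be virtualized into one DHT node per resource type, with the type hash forming the high-order bits of the node identifier so that a type lookup resolves to a contiguous arc of the ring. The class name ToyDGRID, the identifier split, and the domain names are illustrative assumptions.

```python
import hashlib
from bisect import insort

TYPE_BITS, DOMAIN_BITS = 16, 16          # composite 32-bit node identifier

def h(key: str, bits: int) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << bits)

def node_id(resource_type: str, domain: str) -> int:
    # The type hash occupies the high-order bits, so every virtual server that
    # indexes the same resource type falls into one contiguous arc of the ring.
    return (h(resource_type, TYPE_BITS) << DOMAIN_BITS) | h(domain, DOMAIN_BITS)

class ToyDGRID:
    def __init__(self):
        self.ring = []                    # sorted list of (node_id, domain, type)

    def register(self, domain: str, resource_type: str):
        # One virtual server per (domain, resource type); the index record
        # stays under the control of its own administrative domain.
        insort(self.ring, (node_id(resource_type, domain), domain, resource_type))

    def lookup(self, resource_type: str):
        lo = h(resource_type, TYPE_BITS) << DOMAIN_BITS
        hi = lo + (1 << DOMAIN_BITS)
        return [d for (i, d, _) in self.ring if lo <= i < hi]

grid = ToyDGRID()
grid.register("domainA.example.org", "x86_64-linux")
grid.register("domainB.example.org", "x86_64-linux")
grid.register("domainB.example.org", "gpu-cluster")
print(grid.lookup("x86_64-linux"))        # both domains, each keeping its own record
```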
123

Towards Grid-Wide Modeling and Simulation

Xie, Yong, Teo, Yong Meng, Cai, W., Turner, S. J. 01 1900 (has links)
Modeling and simulation permeate all areas of business, science, and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and collaborative model development is needed, as multiple parties may be involved in the development process. The Grid provides a platform for coordinated resource sharing and for application development and execution. In this paper, we survey existing technologies in modeling and simulation, focusing on the interoperability and composability of simulation components for both simulation development and execution. We also present our recent work on an HLA-based simulation framework on the Grid and discuss the issues involved in achieving composability. / Singapore-MIT Alliance (SMA)
124

Concept for Next Generation Phasor Measurement: A Low-Cost, Self-Contained, and Wireless Design

Miller, Brian Ray 01 December 2010 (has links)
Phasor measurement is a growth technology in the power grid industry. With new funding, grid reliability concerns, and power capacity margins motivating a smart grid transformation, phasor measurement and smart metering are taking center stage as the implementation methods for grid intelligence. This thesis proposes a novel concept for designing a next generation phasor measurement unit. The present generation of phasor measurement units relies on venerable current and voltage transducer technology that is expensive, bulky, and not well suited to the modern age of digital and computerized control signals. In addition, the rising proliferation of installed phasor measurement units will soon result in data overload and heavy demands on network bandwidth and processing centers. This brute-force approach is ill-advised. Forward thinking is required to foresee the future grid, its fundamental operation, and its sensor and controller needs. A reasonably safe assumption is a future grid containing sensors numbering in the thousands or millions. That many sensors cannot transmit raw data over the network without requiring enormous network capacity and data-center processing power. The proposed design combines existing technologies, such as improved current transducers and wireless precision time protocols, into a next generation phasor measurement unit that is entirely self-contained: it requires no external connections because it includes high-performance transducers, a processor, a wireless radio, and even energy-harvesting components. With easy, safe, and low-cost installation, proliferation of thousands or millions of sensors becomes feasible. Moreover, with a scalable sensor network containing thousands or millions of parallel distributed processors, data reduction and processing within the network remove the need for high-bandwidth data transmission or supercomputing data centers.
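As background for what a phasor measurement unit computes, here is a minimal sketch of the standard single-cycle DFT phasor estimate applied to a synthetic 60 Hz waveform. It is generic textbook material rather than the thesis's design, and the sampling rate, amplitude, and phase values are illustrative assumptions.

```python
import numpy as np

def dft_phasor(samples: np.ndarray) -> complex:
    """Single-cycle DFT phasor estimate: RMS magnitude and phase of the
    fundamental, assuming the window spans exactly one nominal cycle."""
    n = len(samples)
    k = np.arange(n)
    return (np.sqrt(2) / n) * np.sum(samples * np.exp(-2j * np.pi * k / n))

# Synthetic 60 Hz waveform: 48 samples per cycle, 120 V RMS, 30 degree phase.
n_samples = 48
theta = 2 * np.pi * np.arange(n_samples) / n_samples
x = 120 * np.sqrt(2) * np.cos(theta + np.radians(30))
ph = dft_phasor(x)
print(abs(ph), np.degrees(np.angle(ph)))   # ~120.0 V, ~30.0 degrees
```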
125

Quality Delaunay meshing of polyhedral volumes and surfaces

Ray, Tathagata, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 137-143).
126

Grid and High-Performance Computing for Applied Bioinformatics

Andrade, Jorge January 2007 (has links)
The beginning of the twenty-first century has been characterized by an explosion of biological information. The avalanche of data grows daily and arises as a consequence of advances in molecular biology, genomics, and proteomics. The challenge for today's biologists lies in decoding this huge and complex body of data in order to achieve a better understanding of how our genes shape who we are, how our genome evolved, and how we function. Without annotation and data mining, the information provided by, for example, high-throughput genomic sequencing projects is of limited use. Bioinformatics is the application of computer science and technology to the management and analysis of biological data, in an effort to address biological questions. The work presented in this thesis has focused on the use of grid and high-performance computing for solving computationally expensive bioinformatics tasks where, due to the very large amount of available data and the complexity of the tasks, new solutions are required for efficient data analysis and interpretation. Three major research topics are addressed: first, the use of grids for distributing the execution of sequence-based proteomic analysis, with applications to optimal epitope selection and to a proteome-wide effort to map the linear epitopes in the human proteome; second, the application of grid technology to genetic association studies, which enabled the analysis of thousands of simulated genotypes; and finally, the development and application of an economics-based model for grid-job scheduling and resource administration. The applications of the grid-based technology developed in this investigation resulted in successfully tagging and linking chromosome regions in Alzheimer's disease, proteome-wide mapping of linear epitopes, and the development of a market-based resource allocation approach for scientific applications on grids. / QC 20100622
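As a rough illustration of the grid-distribution pattern described above (embarrassingly parallel, sequence-based proteomic analysis), here is a minimal sketch of splitting a proteome into independent chunks, each of which would run as one grid job scanning its proteins for candidate 9-mer linear epitopes. It is not the thesis's pipeline; the proteome entries, chunk size, and function names are illustrative assumptions.

```python
def sliding_peptides(sequence: str, length: int = 9):
    """Enumerate candidate linear epitopes (here simply all 9-mers) in one protein."""
    return [sequence[i:i + length] for i in range(len(sequence) - length + 1)]

def make_grid_jobs(proteome: dict, chunk_size: int = 2):
    """Split the proteome into independent chunks; each chunk becomes one grid job."""
    items = list(proteome.items())
    return [dict(items[i:i + chunk_size]) for i in range(0, len(items), chunk_size)]

def run_job(chunk: dict):
    """Work done on one grid node: scan every protein in its chunk."""
    return {name: sliding_peptides(seq) for name, seq in chunk.items()}

# Toy proteome; in practice each entry would be a full protein sequence.
proteome = {"P1": "MKTAYIAKQR", "P2": "GAVLIPFYWS", "P3": "DENQHKRSTC"}
results = {}
for job in make_grid_jobs(proteome):
    results.update(run_job(job))          # on a grid, each run_job call is a remote task
print(len(results["P1"]))                 # number of 9-mer candidates found in P1
```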
127

Study on tribology analysis of chemical mechanical polishing

Chen, Chin-cheng 27 August 2007 (has links)
During the CMP process, a wafer is rotated and pressed face down against a rotating polishing pad. Polishing slurry is delivered continuously onto the top of the pad and forms a thin lubricating film between the wafer and the pad. In this study, a three-dimensional slurry flow model based on a generalized Reynolds equation is developed; the model can be applied to a rough pad and accounts for pad compressibility, and the multi-grid method is used to reduce computational time. From the force and moment balance equations, the tilt angles and the slurry film thickness can be evaluated. When the pad surface is rough, the time-dependent squeeze term should be retained in the model because of the rotation of the pad. The influences of applied load, pad speed, wafer speed, pad compressibility, and surface roughness pattern on the tilt angles and the slurry film thickness are investigated. Results show that the variation of the tilt angles during pad rotation is more significant for the anisotropic roughness pattern than for the isotropic one. The slurry film thickness at the center of the wafer increases as the applied load decreases, the pad speed increases, the wafer speed decreases, or the pad compressibility increases.
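For reference, the classical incompressible Reynolds equation with a squeeze term is sketched below; the thesis's generalized form (which additionally handles pad roughness and compressibility) is not reproduced here, so this is only the textbook starting point, with p the slurry pressure, h the film thickness, \mu the viscosity, and U, V the relative sliding speeds.

```latex
% Classical Reynolds equation for a thin lubricating film; the final term is
% the squeeze term that must be retained when pad rotation makes h time-dependent.
\frac{\partial}{\partial x}\!\left(\frac{h^{3}}{12\mu}\,\frac{\partial p}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(\frac{h^{3}}{12\mu}\,\frac{\partial p}{\partial y}\right)
= \frac{U}{2}\,\frac{\partial h}{\partial x}
+ \frac{V}{2}\,\frac{\partial h}{\partial y}
+ \frac{\partial h}{\partial t}
```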
128

Automated Fault Location In Smart Distribution Systems

Lotfifard, Saeed August 2011 (has links)
Fault location in distribution systems is a critical component of outage management and service restoration, which directly impacts feeder reliability and the quality of the electricity supply. Improving fault location methods supports the Department of Energy (DOE) “Grid 2030” initiatives for grid modernization by improving the reliability indices of the network. Improving the customer average interruption duration index (CAIDI) and the system average interruption duration index (SAIDI) are direct advantages of utilizing a suitable fault location method. As distribution systems gradually evolve into smart distribution systems, applying more accurate fault location methods based on data gathered from the various Intelligent Electronic Devices (IEDs) installed along the feeders becomes quite feasible. How this may be done, and what methodology is needed to reach such a solution, is raised and then systematically answered. To reach this goal, the following tasks are carried out: 1) Existing fault location methods in distribution systems are surveyed and their strengths and weaknesses are studied. 2) The characteristics of IEDs in distribution systems are studied and their impacts on fault location method selection and implementation are detailed. 3) A systematic approach for selecting the optimal fault location method is proposed and implemented to pinpoint the most promising algorithms for a given set of application requirements. 4) An enhanced fault location method based on voltage sag data gathered from IEDs along the feeder is developed; the method solves the problem of multiple fault location estimates and produces more robust results. 5) An optimal IED placement approach for the enhanced fault location method is developed and practical considerations for its implementation are detailed.
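To illustrate the general idea behind voltage-sag-based fault location (not the enhanced method developed in the thesis), the following sketch compares the sag pattern measured by a few IEDs against simulated sag signatures for candidate fault nodes and returns the least-squares best match; the node names and per-unit values are hypothetical.

```python
import numpy as np

def locate_fault(measured_sags: np.ndarray, simulated_sags: dict) -> str:
    """Return the candidate fault location whose simulated sag pattern across
    the IEDs is closest (least squares) to the measured pattern."""
    best, best_err = None, float("inf")
    for node, sim in simulated_sags.items():
        err = float(np.sum((measured_sags - np.asarray(sim)) ** 2))
        if err < best_err:
            best, best_err = node, err
    return best

# Hypothetical per-unit sag magnitudes observed by three IEDs along a feeder,
# and simulated sag signatures for three candidate fault nodes.
measured = np.array([0.62, 0.48, 0.55])
candidates = {
    "node_12": [0.80, 0.70, 0.75],
    "node_27": [0.60, 0.50, 0.56],   # closest to the measurement
    "node_33": [0.40, 0.35, 0.45],
}
print(locate_fault(measured, candidates))   # node_27
```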
129

Beyond music sharing: an evaluation of peer-to-peer data dissemination techniques in large scientific collaborations

Al Kiswany, Samer 05 1900 (has links)
The avalanche of data from scientific instruments and the ensuing interest from geographically distributed users in analyzing and interpreting it accentuate the need for efficient data dissemination. An optimal data distribution scheme will find the delicate balance between the conflicting requirements of minimizing transfer times, minimizing the impact on the network, and uniformly distributing load among participants. We identify several data distribution techniques, some successfully employed by today's peer-to-peer networks: staging, data partitioning, orthogonal bandwidth exploitation, and combinations of the above. We use simulations to explore the performance of these techniques in contexts similar to those of today's data-centric scientific collaborations and derive several recommendations for efficient data dissemination. Our experimental results show that the peer-to-peer solutions that offer load balancing and good fault tolerance properties and have embedded participation incentives lead to unjustified costs in today's scientific data collaborations deployed on over-provisioned network cores. However, as user communities grow and these deployments scale, peer-to-peer data delivery mechanisms will likely outperform other techniques.
130

Workflow scheduling for service oriented cloud computing

Fida, Adnan 13 August 2008
Service Orientation (SO) and grid computing are two computing paradigms that, when put together using Internet technologies, promise to provide a scalable yet flexible computing platform for a diverse set of distributed computing applications. This practice gives rise to the notion of a computing cloud that addresses some previous limitations of interoperability, resource sharing, and utilization within distributed computing.

In such a Service Oriented Computing Cloud (SOCC), applications are formed by composing a set of services together. In addition, hierarchical service layers are possible, where general-purpose services at lower layers are composed to deliver more domain-specific services at higher layers. In general, an SOCC is a horizontally scalable computing platform that offers its resources as services in a standardized fashion.

Workflow-based applications are a suitable target for an SOCC, where workflow tasks are executed via service calls within the cloud. One or more workflows can be deployed over an SOCC, and their execution requires scheduling services to workflow tasks as the tasks become ready according to their interdependencies.

In this thesis, heuristic-based scheduling policies are evaluated for scheduling workflows over a collection of services offered by the SOCC. Various execution scenarios and workflow characteristics are considered to understand the implications of heuristic-based workflow scheduling.
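As a concrete example of the kind of heuristic scheduling policy evaluated in such work, the following is a minimal min-min list-scheduling sketch for a small workflow DAG over two services; it is a generic illustration rather than one of the thesis's specific policies, and the task names, service names, and costs are assumptions.

```python
def minmin_schedule(tasks, deps, cost):
    """Toy min-min list scheduler for a workflow DAG.
    tasks: task names; deps: task -> set of prerequisite tasks;
    cost[t][s]: execution time of task t on service s.
    Returns (assignment, makespan)."""
    services = {s for t in cost for s in cost[t]}
    svc_free = {s: 0.0 for s in services}       # time at which each service is next free
    finish, assignment = {}, {}                  # per-task finish time and chosen service
    unscheduled = set(tasks)

    def completion(t, s):
        # Task t can start once its dependencies are done and service s is free.
        start = max([svc_free[s]] + [finish[d] for d in deps.get(t, set())])
        return start + cost[t][s]

    while unscheduled:
        ready = [t for t in unscheduled if all(d in finish for d in deps.get(t, set()))]
        # min-min: among all ready (task, service) pairs, pick the smallest completion time.
        t, s = min(((t, s) for t in ready for s in cost[t]), key=lambda p: completion(*p))
        finish[t] = completion(t, s)
        svc_free[s] = finish[t]
        assignment[t] = s
        unscheduled.remove(t)
    return assignment, max(finish.values())

# Hypothetical four-task workflow scheduled over a fast and a slow service.
tasks = ["extract", "clean", "analyze", "report"]
deps = {"clean": {"extract"}, "analyze": {"extract"}, "report": {"clean", "analyze"}}
cost = {t: {"svc_fast": 2.0, "svc_slow": 5.0} for t in tasks}
print(minmin_schedule(tasks, deps, cost))
```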
