21

Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing

Meng, Xiang January 2017
The domain of nanoscale optical science and technology combines the classical world of electromagnetics with the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to the nanoscale, or even to the atomic level, far smaller than the wavelengths they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can produce important near-field and far-field optical responses. These optical properties have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators, and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging branch of nanoscience, known as nanophotonics, is a highly interdisciplinary field requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling, and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales, where the nature of the nanostructured matter controls the interactions. In addition, rapid advances in computing capabilities, such as parallel computing, have become a critical element in investigating advanced nanophotonic devices. This role has taken on even greater urgency as device dimensions shrink, since the design of these devices requires extensive memory and extremely long core hours. Distributed computing platforms that support parallel computing are therefore required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques and uses computing machines to analyze and solve otherwise intractable scientific challenges. In particular, parallel computing is a form of computation based on the principle that large problems can often be divided into smaller ones that are then solved concurrently. In this dissertation, we report a series of new nanophotonic developments enabled by advanced parallel computing techniques. The applications include structure optimization at the nanoscale, both to control the electromagnetic response of materials and to manipulate nanoscale structures for enhanced field concentration, enabling breakthroughs in imaging and sensing systems (chapters 3 and 4) and improving the spatio-temporal resolution of spectroscopies (chapter 5). We also report investigations of the confinement of light-matter interactions in the quantum mechanical regime, where novel size-dependent properties enhance a wide range of technologies, from tunable and efficient light sources and detectors to other nanophotonic elements with enhanced functionality (chapters 6 and 7).
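A minimal illustration of the decomposition principle stated in the abstract (a large problem divided into smaller pieces that are solved concurrently) is sketched below. The kernel, array sizes, and data are invented for illustration and are not taken from the dissertation; C with OpenMP is used only as a convenient stand-in for a parallel computing platform.

    /* Illustrative sketch only: a large update loop is split into chunks
       that threads process concurrently, the decomposition principle
       described in the abstract above. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double field[N], updated[N];

        for (int i = 0; i < N; i++)
            field[i] = (double)i / N;          /* arbitrary initial data */

        /* Each thread updates a disjoint slice of the array concurrently. */
        #pragma omp parallel for
        for (int i = 1; i < N - 1; i++)
            updated[i] = 0.5 * (field[i - 1] + field[i + 1]);

        printf("updated[%d] = %f\n", N / 2, updated[N / 2]);
        return 0;
    }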
22

Task-parallel extension of a data-parallel language

Macielinski, Damien D. 28 October 1994
Two prevalent models of parallel programming are data parallelism and task parallelism. Data parallelism is the simultaneous application of a single operation to a data set; this model fits best with regular computations. Task parallelism is the simultaneous application of possibly different operations to possibly different data sets; this model fits best with irregular computations. Efficient solution of some problems requires both regular and irregular computations, and implementing efficient, portable parallel solutions to such problems requires a high-level language that can accommodate both task and data parallelism. We have extended the data-parallel language Dataparallel C to include task parallelism, so programmers may now use data and task parallelism within the same program. The extension permits the nesting of data-parallel constructs inside a task-parallel framework. We present a banded linear system solver to analyze the benefits of our language extensions. / Graduation date: 1995
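The nesting described above (data-parallel constructs inside a task-parallel framework) can be illustrated conceptually. The sketch below uses C with OpenMP as a stand-in, since the abstract does not show Dataparallel C syntax; the tasks and arrays are invented for illustration.

    /* Conceptual sketch only: OpenMP stands in for the Dataparallel C
       extension described above. Two different tasks run concurrently
       (task parallelism), and each applies a single operation across its
       own data set (data parallelism nested inside the task framework). */
    #include <stdio.h>
    #include <omp.h>

    #define N 100000

    static double a[N], b[N];

    int main(void) {
        omp_set_max_active_levels(2);      /* allow nested parallel regions */

        #pragma omp parallel sections      /* task-parallel framework */
        {
            #pragma omp section            /* task 1: scale indices into a */
            {
                #pragma omp parallel for   /* data-parallel inner loop */
                for (int i = 0; i < N; i++)
                    a[i] = 2.0 * i;
            }
            #pragma omp section            /* task 2: square indices into b */
            {
                #pragma omp parallel for
                for (int i = 0; i < N; i++)
                    b[i] = (double)i * i;
            }
        }
        printf("a[10] = %.1f, b[10] = %.1f\n", a[10], b[10]);
        return 0;
    }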
23

IPPM: Interactive parallel program monitor

Brandis, Robert Craig 08 1900
M.S. / Computer Science & Engineering / The tasks associated with designing and implementing parallel programs involve effectively partitioning the problem, defining an efficient control strategy, and mapping the design to a particular system. The task then becomes one of analyzing the program for correctness and stepwise refinement of its performance. New tools are needed to assist the programmer with these last two stages, and metrics and methods of instrumentation are needed to help with behavior analysis (debugging) and performance analysis. First, current tools and analysis methods are reviewed, and then a set of models is proposed for analyzing parallel programs. The design of IPPM, based on these models, is then presented. IPPM is an interactive parallel program monitor for the Intel iPSC. It gives a post-mortem view of an iPSC program based on a script of events collected during execution. A user can observe changes in program state and synchronization, select statistics, interactively filter events, and time critical sequences.
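A toy example of post-mortem trace analysis in the spirit of such a monitor is sketched below. The event layout (time, node, kind) and the statistic computed are hypothetical and are not IPPM's actual trace format or interface.

    /* Hedged sketch: filtering a post-execution event script and timing a
       sequence of interest, in the spirit of the monitor described above.
       The record layout is hypothetical, not IPPM's real format. */
    #include <stdio.h>

    typedef struct {
        double time;   /* timestamp recorded during execution */
        int    node;   /* iPSC-style node (process) identifier */
        char   kind;   /* 'S' = send, 'R' = receive, 'C' = compute */
    } Event;

    int main(void) {
        /* A tiny "script of events" as it might look after execution. */
        Event trace[] = {
            {0.10, 0, 'C'}, {0.25, 0, 'S'}, {0.26, 1, 'R'},
            {0.40, 1, 'C'}, {0.55, 1, 'S'}, {0.56, 0, 'R'},
        };
        int n = sizeof trace / sizeof trace[0];

        /* "Filter events": keep only message sends and time the span
           between the first and the last of them. */
        double first = -1.0, last = -1.0;
        int sends = 0;
        for (int i = 0; i < n; i++) {
            if (trace[i].kind == 'S') {
                if (first < 0.0) first = trace[i].time;
                last = trace[i].time;
                sends++;
            }
        }
        printf("%d sends between t=%.2f and t=%.2f\n", sends, first, last);
        return 0;
    }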
24

Automatic dynamic decomposition of programs on distributed memory machines

Doddapaneni, Srinivas P. January 1997
No description available.
25

Optimizing data parallelism in applicative languages

Alahmadi, Marwan Ibrahim January 1990
No description available.
26

Improved algorithms for some classical graph problems /

Chong Ka-wong. January 1996
Thesis (Ph. D.)--University of Hong Kong, 1996. / Includes bibliographical references (leaf 82-85).
27

Machine-independent compiler optimizations for collective communication /

Weathersby, Wilbert D. January 1999
Thesis (Ph. D.)--University of Washington, 1999. / Vita. Includes bibliographical references (p. 169-184).
28

Multi-stage polypeptides comparison improvements for a parallel protein interaction prediction engine /

North, Christopher January 1900
Thesis (M.C.S.) - Carleton University, 2007. / Includes bibliographical references (p. 78-84). Also available in electronic format on the Internet.
29

Parallel programming using functional languages

Roe, Paul. January 1991
Thesis (Ph.D.)--University of Glasgow, 1991. / Print version also available. Mode of access: World Wide Web. System requirements: Adobe Acrobat Reader required to view PDF document.
30

The design and implementation of a parallel prolog opcode-interpreter on a multiprocessor architecture /

Hakansson, Carolyn Ann, January 1987
Thesis (M.S.)--Oregon Graduate Center, 1987.
