About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
191

Knowledge support for parallel performance data mining

Huck, Kevin A., January 2009
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 218-231). Also available online in Scholars' Bank; and in ProQuest, free to University of Oregon users.
192

Efficient solutions for the load distribution problem

Yau, Cho-ki, Joe. January 1999
Thesis (M. Phil.)--University of Hong Kong, 1999. Includes bibliographical references (leaves 85-92).
193

Solving combinatorial based chemical engineering problems via parallel evolutionary approaches

Wong, King Hei. January 2009
Includes bibliographical references (p. 80-88).
194

Productivity with performance property/behavior-based automated composition of parallel programs from self-describing components

Mahmood, Nasim, 2007
Thesis (Ph. D.)--University of Texas at Austin, 2007. Vita. Includes bibliographical references.
195

Parallel process placement

Handler, Caroline, January 1989
This thesis investigates methods of automatic allocation of processes to available processors in a given network configuration. The research described covers the investigation of various algorithms for optimal process allocation, among them an algorithm using a branch-and-bound technique, an algorithm based on graph theory, and a heuristic algorithm involving cluster analysis. These have been implemented and tested in conjunction with the gathering of performance statistics during program execution, for use in improving subsequent allocations. The system has been implemented on a network of loosely coupled microcomputers using multi-port serial communication links to simulate a transputer network. The concurrent programming language occam has been implemented, replacing the explicit process allocation constructs with an automatic placement algorithm. This enables the source code to be completely separated from hardware considerations.
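For illustration only: a minimal Python sketch of the kind of heuristic, load-balancing process placement the abstract describes. The greedy longest-processing-time rule and all names here are assumptions chosen for the example, not the algorithms developed in the thesis.

```python
# Illustrative sketch: greedy heuristic placement of weighted processes onto
# processors, balancing estimated load. Not the thesis's actual algorithm.

def place_processes(process_costs, num_processors):
    """Assign each process to the currently least-loaded processor.

    process_costs: dict mapping process name -> estimated execution cost
                   (e.g. gathered from performance statistics of earlier runs).
    Returns a dict mapping process name -> processor index.
    """
    loads = [0.0] * num_processors
    placement = {}
    # Place the most expensive processes first (longest-processing-time rule).
    for proc, cost in sorted(process_costs.items(), key=lambda kv: -kv[1]):
        target = min(range(num_processors), key=lambda p: loads[p])
        placement[proc] = target
        loads[target] += cost
    return placement

if __name__ == "__main__":
    costs = {"filter": 8.0, "fft": 5.0, "io": 2.0, "control": 1.0, "render": 6.0}
    print(place_processes(costs, num_processors=2))
```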
196

Evaluation of a Three Degree of Freedom Revolute-Spherical-Revolute Joint Configuration Parallel Manipulator

Feck, Joseph J., 24 September 2013
No description available.
197

Shared Memory Abstractions for Heterogeneous Multicore Processors

Schneider, Scott, 21 January 2011
We are now seeing diminishing returns from classic single-core processor designs, yet the number of transistors available for a processor is still increasing. Processor architects are therefore experimenting with a variety of multicore processor designs. Heterogeneous multicore processors with Explicitly Managed Memory (EMM) hierarchies are one such experimental design, with the potential for high performance but at the cost of great programmer effort. EMM processors have cores that are divorced from the normal memory hierarchy, so the onus is on the programmer to manage locality and parallelism. This dissertation presents the Cellgen source-to-source compiler, which moves some of this complexity back into the compiler. Cellgen offers a directive-based programming model with semantics similar to OpenMP for the Cell Broadband Engine, a general-purpose processor with EMM. The compiler implicitly handles locality and parallelism, schedules memory transfers for data-parallel regions of code, and provides performance predictions which can be leveraged to make scheduling decisions. We compare this approach to using a software cache, to a different programming model which is task based with explicit data transfers, and to programming the Cell directly using the native SDK. We also present a case study which uses the Cellgen compiler in a comparison across multiple kinds of multicore architectures: heterogeneous, homogeneous, and radically data-parallel graphics processors. / Ph. D.
198

Personalized Computer Architecture as Contextual Partitioning for Speech Recognition

Kent, Christopher Grant, 22 January 2010
Computing is entering an era of hundreds to thousands of processing elements per chip, yet no known form of parallelism scales to that degree. To address this problem, we investigate the foundation of a computer architecture where processing elements and memory are contextually partitioned based upon facets of a user's life. Such Contextual Partitioning (CP), the situational handling of inputs, employs a method for allocating resources that differs from the approaches used in today's architectures. Instead of focusing components on mutually exclusive parts of a task, as in Thread Level Parallelism, CP assigns different physical components to different versions of the same task, defining versions by contextual distinctions in device usage. Thus, application data is processed differently based on the situation of the user. Further, partitions may be user specific, leading to personalized architectures. Our focus is mobile devices, which are, or can be, personalized to one owner. Our investigation is centered on leveraging CP for accurate and real-time speech recognition on mobile devices, scalable to large vocabularies, a highly desired application for future user interfaces. By contextually partitioning a vocabulary and training the partitions as separate acoustic models with SPHINX, we demonstrate a maximum error reduction of 61% compared to a unified approach. CP also allows for systems robust to changes in vocabulary, requiring up to 97% less training when updating old vocabulary entries with new words, and incurring fewer errors from the replacement. Finally, CP has the potential to scale nearly linearly with increasing core counts, offering architectures effective with future processor designs. / Master of Science
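A minimal, hypothetical Python sketch of the dispatch idea the abstract describes: input is routed to a model trained for the user's current context rather than to one unified model. The contexts, the Model interface, and the fallback rule are illustrative assumptions, not the thesis's SPHINX-based implementation.

```python
# Illustrative sketch: route an utterance to a context-specific model instead
# of a single unified model. The contexts and Model interface are hypothetical.

class Model:
    def __init__(self, name):
        self.name = name

    def recognize(self, audio):
        # A real system would run a trained acoustic/language model here.
        return f"<decoded by {self.name} model>"

# One model per contextual partition of the vocabulary (facets of the user's
# life), each trained separately on its own subset.
partitions = {
    "work":   Model("work"),
    "home":   Model("home"),
    "travel": Model("travel"),
}

def recognize_in_context(audio, context):
    """Dispatch recognition to the partition matching the user's situation."""
    model = partitions.get(context, partitions["home"])  # fallback partition
    return model.recognize(audio)

if __name__ == "__main__":
    print(recognize_in_context(b"...pcm samples...", context="work"))
```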
199

Assessing the Suitability of Python as a Language for Parallel Programming

Kohli, Manav S, 01 January 2016
With diminishing gains in processing power from successive generations of hardware development, there is a new focus on using advances in computer science and parallel programming to build faster, more efficient software. As computers trend toward including multiple and multicore processors, parallel computing serves as a promising option for optimizing the next generation of software applications. However, models for implementing parallel programs remain highly opaque due to their reliance on languages such as Fortran, C, and C++. In this paper I investigate Python as an option for implementing parallel programming techniques in application development. I analyze the efficiency and accessibility of the MPI for Python and IPython Parallel packages by implementing a Monte Carlo simulation in parallel with each and comparing their speeds to the sequential calculation. While MPI for Python offers the core functionality of MPI and C-like syntax in Python, IPython Parallel's architecture provides a truly unique model.
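A minimal sketch of the kind of experiment the abstract describes, using MPI for Python (mpi4py): each rank draws random samples and a reduction combines the partial counts. Estimating pi is an illustrative choice here; the abstract does not say which quantity was computed, and the sample count is arbitrary.

```python
# Illustrative mpi4py sketch: embarrassingly parallel Monte Carlo estimate.
# Run with, e.g.:  mpiexec -n 4 python monte_carlo.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

SAMPLES_PER_RANK = 1_000_000

# Each rank counts how many of its random points fall inside the unit circle.
random.seed(rank)
hits = sum(
    1
    for _ in range(SAMPLES_PER_RANK)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)

# Combine the partial counts on rank 0.
total_hits = comm.reduce(hits, op=MPI.SUM, root=0)

if rank == 0:
    total_samples = SAMPLES_PER_RANK * size
    print("pi ~=", 4.0 * total_hits / total_samples)
```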
200

Spatially developing flows with localized forcing

Hunt, Robert Edward, January 1995
No description available.
