  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Types for quantum computing

Duncan, Ross January 2006 (has links)
No description available.
12

Practical search algorithms and Boolean circuits for quantum computers

Younes, Ahmed January 2004 (has links)
No description available.
13

Dynamic resource management of network-on-chip platforms for multi-stream video processing

Mendis, Hashan Roshantha January 2017 (has links)
This thesis considers resource management in the context of parallel multiple video stream decoding on multicore/many-core platforms. Such platforms have tens or hundreds of on-chip processing elements which are connected via a Network-on-Chip (NoC). Inefficient task allocation configurations can negatively affect the communication cost and resource contention in the platform, leading to predictability and performance issues. Efficient resource management for large-scale complex workloads is considered a challenging research problem, especially when applications such as video streaming and decoding have dynamic and unpredictable workload characteristics. For these types of applications, runtime heuristic-based task mapping techniques are required. As the application and platform size increase, decentralised resource management techniques become more desirable, overcoming the reliability and performance bottlenecks of centralised management. In this work, several heuristic-based runtime resource management techniques, targeting real-time video decoding workloads, are proposed. Firstly, two admission control approaches are proposed: one is fully deterministic and highly predictable; the other is heuristic-based and balances predictability against performance. Secondly, a pair of runtime task mapping schemes are presented, which make use of limited known application properties, communication cost and blocking-aware heuristics. Combined with the proposed deterministic admission controller, these techniques can provide strict timing guarantees for hard real-time streams whilst improving resource usage. The third contribution in this thesis is a distributed, bio-inspired, low-overhead task re-allocation technique, which is used to further improve the timeliness and workload distribution of admitted soft real-time streams.
Finally, this thesis explores parallelisation and resource management issues surrounding soft real-time video streams that have been encoded using complex encoding tools and modern codecs such as High Efficiency Video Coding (HEVC). Properties of real streams and decoding trace data are analysed to statistically model and generate synthetic HEVC video decoding workloads. These workloads are shown to have complex and varying task dependency structures and resource requirements. To address these challenges, two novel runtime task clustering and mapping techniques for Tile-parallel HEVC decoding are proposed. These strategies consider the workload's communication-to-computation ratio and stream-specific characteristics to balance predictability improvement against communication energy reduction. Lastly, several task-to-memory-controller port assignment schemes are explored to alleviate performance bottlenecks resulting from memory traffic contention.
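The communication-aware mapping heuristics summarised in this abstract could, in rough outline, resemble the following sketch. This is an illustrative reconstruction under broad assumptions, not the thesis's actual algorithms: the mesh-NoC hop-distance cost model, the unit utilisation budget, and all data structures (`cores`, `placed`, the `task` dictionary) are invented here for the example.

```python
# Hypothetical sketch of a communication-cost-aware runtime task mapping
# heuristic for a mesh NoC. All names and the cost model are illustrative
# assumptions, not taken from the thesis.

def manhattan_hops(a, b):
    """NoC hop distance between two mesh coordinates (x, y)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def map_task(task, cores, placed):
    """Greedily place `task` on the core that minimises the estimated
    communication cost to its already-placed parent tasks, subject to a
    simple utilisation-based admission check.

    task   : {"id": str, "load": float, "parents": {parent_id: data_volume}}
    cores  : {(x, y): {"util": float}}
    placed : {task_id: (x, y)} mapping of already-admitted tasks
    """
    best_core, best_cost = None, float("inf")
    for core, info in cores.items():
        if info["util"] + task["load"] > 1.0:   # admission check: core full
            continue
        # communication cost = sum over parents of (data volume x hop distance)
        cost = sum(vol * manhattan_hops(core, placed[parent])
                   for parent, vol in task["parents"].items()
                   if parent in placed)
        if cost < best_cost:
            best_core, best_cost = core, cost
    if best_core is not None:
        cores[best_core]["util"] += task["load"]
        placed[task["id"]] = best_core
    return best_core  # None signals rejection by admission control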
14

Atomic and Optical Realizations of Cluster Quantum Computation

Joo, Jaewoo January 2007 (has links)
No description available.
15

Quantum Computing With Macroscopic Heralding

Metz, Jeremy January 2007 (has links)
No description available.
16

Fully automated transformation of hardware-agnostic, data-parallel programs for host-driven executions on GPUs

Guo, Jing January 2012 (has links)
This thesis explores the feasibility and performance gains of a fully integrated and automatic approach to generating GPU programs from a high-level and completely hardware-agnostic abstraction. Over the past decade, Graphics Processing Units (GPUs) have become increasingly popular because of their massive computing power and attractive performance/price ratios. Various high-level programming models have further driven the widespread use of GPUs for computation- and data-intensive general-purpose applications. In the literature, speedups of orders of magnitude over single- or multi-core CPUs have been reported. Despite such advancements, developers still shoulder the burden of exploiting complex low-level hardware details to achieve optimal performance. Therefore, it is of great interest and benefit to have an even higher level of programming abstraction. To this end, we base our research on a functional array programming language, which supports both implicit memory management and high-level data-parallel operations. Within this context, we identify several key challenges that must be overcome to achieve competitive performance: mapping the data-parallel operations efficiently onto the GPU's massive parallelism, managing and minimising CPU-GPU data communication, optimising GPU memory access efficiency, and overcoming the data copying problem inherent to the functional setting. Compilation techniques addressing these challenges have been proposed and implemented in the Single Assignment C (SAC) compiler framework, which allows the automatic generation of GPU programs from very high-level abstractions. Experimental results have shown that, for a set of representative parallel applications, our compiler-generated code can achieve a level of performance that is (on average) one order of magnitude higher than the hand-written sequential counterparts. For several dense linear algebra kernels, the performance is comparable to or
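The hardware-agnostic, data-parallel style this abstract describes can be illustrated with a small sketch. This is not SAC syntax: the `dp_map` helper and the SAXPY example are hypothetical stand-ins showing why such operations are easy targets for automatic GPU code generation, since every element is computed independently.

```python
# Illustrative sketch (not SAC): hardware-agnostic data-parallel
# operations in a functional style. Because each element is computed
# independently with no shared mutable state, a compiler such as the one
# described above is free to map iterations onto GPU threads.

def dp_map(f, xs):
    """Conceptually parallel elementwise map; here executed sequentially."""
    return [f(x) for x in xs]

def saxpy(a, xs, ys):
    """Classic dense kernel (a*x + y) expressed purely as a data-parallel map."""
    return dp_map(lambda pair: a * pair[0] + pair[1], zip(xs, ys))
```

The point of the abstraction is that `saxpy` contains no loops, indices, or memory-transfer calls; decisions such as thread-block shape and CPU-GPU data movement are left entirely to the compiler.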
17

Assessing the impact of processor design decisions on simulation based verification complexity using formal modeling with experiments at instruction set architecture level

Yuan, Fangfang January 2012 (has links)
The Instruction Set Architecture (ISA) describes the key functionalities of a processor design and is the most comprehensible format for enabling humans to understand the structure of the entire processor design. This thesis first introduces the construction of a generic ISA formal model with mathematical notations rather than programming languages, and demonstrates the extensions towards specific ISA designs. The stepwise refinement modeling technique gives rise to a hierarchically structured model, which eases the overall comprehensibility of the ISA and reduces the effort required for modeling similar designs. The ISA models serve as self-consistent, complete, and unambiguous specifications for coding, while helping engineers explore different design options beforehand. In the design phase, a selection of features is available to architects in order for the design to be trimmed towards a particular optimization target, e.g. low power consumption or fast computation, which can be assessed before implementation. However, taking verification into consideration, there is, to my knowledge, no way to estimate the difficulty of verifying a design before coding it. There needs to be a platform and a metric from which both functional and non-functional properties can be quantitatively represented and then compared before implementation. Hence, this thesis secondly proposes a metric, based on the formally reasoned extension of the generic ISA models, as an estimator of a non-functional property: the verification complexity for achieving verification goals. The main claim of this thesis is that the verification complexity in simulation-based verification can be accurately retrieved from a hierarchically constructed ISA formal model in which the functionalities are fully specified with the correctness preserved. The modeling structure allows relative comparisons at the reasonably high level of abstraction afforded by the hierarchical formalization.
The analysis of the experimental ISA emulator assesses the quality of the metric and demonstrates the applicability of the proposed metric.
18

High level modelling and design of a low powered event processor

Chen, Yuan January 2009 (has links)
With the fast development of semiconductor technology, more and more Intellectual Property (IP) cores can be integrated into one chip under the Globally Asynchronous, Locally Synchronous (GALS) architecture. Power has become the main restriction on System-on-Chip (SoC) performance, especially when the chip is used in a portable device. Many low-power technologies have been proposed and studied for IP core design. However, there is a shortage of system-level power management schemes (policies) for the GALS architecture. In particular, the use of Dynamic Power Management (DPM) to optimize SoC power dissipation under latency restrictions remains relatively unexplored. This thesis describes the work of modelling and design of an asynchronous event coprocessor to control the operations of an IP core in the GALS architecture.
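The DPM trade-off the abstract alludes to can be made concrete with a textbook timeout-style policy sketch. This is a standard technique used for illustration only, not the coprocessor design from the thesis; the parameter names and units are assumptions.

```python
# Hedged sketch of a classic break-even-time DPM policy (a textbook
# technique, not the thesis's event coprocessor): sleeping only pays off
# when the predicted idle period exceeds the break-even time, because the
# sleep/wake transition itself costs energy and latency.

def break_even_time(e_transition, p_idle, p_sleep, t_transition):
    """Shortest idle duration for which sleeping saves energy.

    e_transition : energy of one sleep+wake cycle (J)
    p_idle       : power while idle but powered on (W)
    p_sleep      : power in the sleep state (W)
    t_transition : total sleep+wake transition time (s)
    """
    return max(t_transition, e_transition / (p_idle - p_sleep))

def dpm_decision(predicted_idle, e_tr, p_idle, p_sleep, t_tr):
    """Return 'sleep' if powering down over the predicted idle period
    saves energy, otherwise 'idle'."""
    if predicted_idle > break_even_time(e_tr, p_idle, p_sleep, t_tr):
        return "sleep"
    return "idle"
```

A latency-aware variant, as hinted at in the abstract, would additionally reject the sleep decision whenever the wake-up delay could violate a pending event's deadline.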
19

Continuous variables and quantum computation

Wagner, Robert Christian January 2011 (has links)
This thesis addresses ideas and problems in continuous-variable quantum computation and information. Beginning with a physically motivated answer to the question "when can it be said that a physical system is performing a computation?", I make practical investigations into computational problems. I define and describe universal continuous-variable quantum computation (CVQC) in terms of conventional encodings into physical systems and discuss algorithms. I then look at the classical information content of some continuous-variable quantum states and how they can be used for a communication scheme. With my collaborators I show how to encode and implement universal classical continuous-variable computation in microwave circuitry. Analogously, I then go on to demonstrate universal CVQC in the micromaser experiment. My investigations into realistic computation and information schemes have led me to the conclusion that taking advantage of the natural computation found in physical systems is of the utmost importance for lasting progress to be made in computer technology. Having a single "universal" device which can do everything you might ever want it to do, given enough time, is an outdated view of computation, and specialist devices are becoming increasingly important. Investigating unconventional physics for its computational potential, therefore, is a rich source of potential computer technology which will be of everyday importance within our lifetimes.
20

Gigahertz clocked point-to-point and multi-user quantum key distribution systems

Fernandez-Marmol, Veronica January 2006 (has links)
No description available.
