21

Quantum theory from the perspective of general probabilistic theories

Al-Safi, Sabri Walid January 2015 (has links)
This thesis explores various perspectives on quantum phenomena, and how our understanding of these phenomena is informed by the study of general probabilistic theories. Particular attention is given to quantum nonlocality, and its interaction with areas of physical and mathematical interest such as entropy, reversible dynamics, information-based games and the idea of negative probability. We begin with a review of non-signaling distributions and convex operational theories, including “black box” descriptions of experiments and the mathematics of convex vector spaces. In Chapter 3 we derive various classical and quantum-like quasiprobabilistic representations of arbitrary non-signaling distributions. Previously, results in which the density operator is allowed to become non-positive [1] have proved useful in derivations of quantum theory from physical requirements [2]; we derive a dual result in which the measurement operators instead are allowed to become non-positive, and show that the generation of any non-signaling distribution is possible using a fixed separable state with negligible correlation. We also derive two distinct “quasi-local” models of non-signaling correlations. Chapter 4 investigates non-local games, in particular the game known as Information Causality. By analysing the probability of success in this game, we prove the conjectured tightness of a bound given in [3] concerning how well entanglement allows us to perform the task of random access coding, and introduce a quadratic bias bound which seems to capture a great deal of information about the set of quantum-achievable correlations. By reformulating Information Causality in terms of entropies, we find that a sensible measure of entropy precludes many general probabilistic theories whose non-locality is stronger than that of quantum theory. 
Chapter 5 explores the role that reversible transitivity (the principle that any two pure states are joined by a reversible transformation) plays as a characteristic feature of quantum theory. It has previously been shown that in Boxworld, the theory allowing for the full set of non-signaling correlations, any reversible transformation on a restricted class of composite systems is merely a composition of relabellings of measurement choices and outcomes, and permutations of subsystems [4]. We develop a tabular description of Boxworld states and effects first introduced in [5], and use this to extend this reversibility result to any composite Boxworld system in which none of the subsystems are classical.
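The CHSH-style bounds discussed in this abstract can be illustrated with a short sketch (my own illustration, not the thesis's code): it evaluates the CHSH expression on the Popescu-Rohrlich (PR) box, the canonical maximally non-signaling distribution, and compares the result against the classical bound of 2 and Tsirelson's quantum bound of 2√2.

```python
# Illustrative sketch: the CHSH expression S = E(0,0) + E(0,1) + E(1,0) - E(1,1)
# evaluated on the PR box. Local theories obey S <= 2, quantum theory reaches
# 2*sqrt(2) (Tsirelson's bound), and the PR box attains the algebraic maximum 4.
import math

def pr_box(a, b, x, y):
    """P(a,b|x,y) for the PR box: outputs satisfy a XOR b = x AND y."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(p, x, y):
    """E(x,y) = sum over a,b of (-1)^(a+b) * P(a,b|x,y)."""
    return sum((-1) ** (a + b) * p(a, b, x, y) for a in (0, 1) for b in (0, 1))

def chsh(p):
    return (correlator(p, 0, 0) + correlator(p, 0, 1)
            + correlator(p, 1, 0) - correlator(p, 1, 1))

print(chsh(pr_box))       # 4.0: maximal non-signaling violation
print(2 * math.sqrt(2))   # Tsirelson's quantum bound, about 2.828
```

Replacing `pr_box` with any other conditional distribution P(a,b|x,y) gives the corresponding CHSH value, which is one way to locate a "black box" theory relative to the classical and quantum sets.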
22

Cache Characterization and Performance Studies Using Locality Surfaces

Sorenson, Elizabeth Schreiner 14 July 2005 (has links) (PDF)
Today's processors commonly use caches to help overcome the disparity between processor and main memory speeds. Due to the principle of locality, most of the processor's requests for data are satisfied by the fast cache memory, resulting in a significant performance improvement. Methods for evaluating workloads and caches in terms of locality are valuable for cache design. In this dissertation, we present a locality surface which displays both temporal and spatial locality on one three-dimensional graph. We provide a solid, mathematical description of locality data and equations for visualization. We then use the locality surface to examine the locality of a variety of workloads from the SPEC CPU 2000 benchmark suite. These surfaces contain a number of features that represent sequential runs, loops, temporal locality, striding, and other patterns from the input trace. The locality surface can also be used to evaluate methodologies that involve locality. For example, we evaluate six synthetic trace generation methods and find that none of them accurately reproduce an original trace's locality. We then combine a mathematical description of caches with our locality definition to create cache characterization surfaces. These new surfaces visually relate how references with varying degrees of locality function in a given cache. We examine how varying the cache size, line size, and associativity affects a cache's response to different types of locality. We formally prove that the locality surface can predict the miss rate in some types of caches. Our locality surface matches cache simulation results well, particularly for caches with large associativities. It also lets us qualitatively choose prudent values for cache size and line size. Further, the locality surface can predict the miss rate with 100% accuracy for some fully associative caches and with some error for set associative caches. One drawback to the locality surface is the time intensity of the stack-based algorithm. 
We provide a new parallel algorithm that reduces the computation time significantly. With this improvement, the locality surface becomes a viable and valuable tool for characterizing workloads and caches, predicting cache simulation results, and evaluating any procedure involving locality.
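The stack-based computation underlying temporal-locality measurement can be sketched as follows (a minimal illustration, not the dissertation's parallel algorithm): the stack (reuse) distance of a reference is the number of distinct addresses touched since the previous access to the same address, and in a fully associative LRU cache of size C, a reference misses exactly when its distance is at least C.

```python
# Illustrative O(n^2) stack-distance computation over an address trace.
def stack_distances(trace):
    stack = []   # LRU stack: most-recently-used address at the end
    dists = []
    for addr in trace:
        if addr in stack:
            # distinct addresses accessed since the last touch of addr
            d = len(stack) - 1 - stack.index(addr)
            stack.remove(addr)
        else:
            d = float("inf")   # cold (first-touch) reference
        stack.append(addr)
        dists.append(d)
    return dists

# A tight loop over three addresses re-touches each after 2 distinct others:
print(stack_distances([1, 2, 3, 1, 2, 3]))  # [inf, inf, inf, 2, 2, 2]
```

Histogramming these distances (jointly with an analogous spatial-distance measure) is the kind of data a locality surface visualizes; the quadratic cost of this naive version is the "time intensity" the abstract refers to.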
23

A Novel Cache Migration Scheme in Network-on-Chip Devices

Nafziger, Jonathan W. 06 December 2010 (has links)
No description available.
24

Methods for Creating and Exploiting Data Locality

Wallin, Dan January 2006 (has links)
The gap between processor speed and memory latency has led to the use of caches in the memory systems of modern computers. Programs must use the caches efficiently and exploit data locality for maximum performance. Multiprocessors, built from many processing units, are becoming commonplace not only in large servers but also in smaller systems such as personal computers. Multiprocessors require careful data locality optimizations since accesses from other processors can lead to invalidations and false sharing cache misses. This thesis explores hardware and software approaches for creating and exploiting temporal and spatial locality in multiprocessors. We propose the capacity prefetching technique, which efficiently reduces the number of cache misses while avoiding false sharing by distinguishing, at run time, cache lines involved in communication from non-communicating cache lines. Prefetching techniques often lead to increased coherence and data traffic. The new bundling technique avoids one of these drawbacks and reduces the coherence traffic in multiprocessor prefetchers. This is especially important in snoop-based systems, where coherence bandwidth is a scarce resource. Most of the studies have been performed on advanced scientific algorithms. This thesis demonstrates that a cc-NUMA multiprocessor, with hardware data migration and replication optimizations, efficiently exploits the temporal locality in such codes. We further present a method of parallelizing a multigrid Gauss-Seidel partial differential equation solver, which creates temporal locality at the expense of increased communication. Our conclusion is that on modern chip multiprocessors, it is more important to optimize algorithms for data locality than to avoid communication, since communication can take place through a shared cache.
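The run-time classification behind capacity prefetching can be sketched abstractly (names and policy are my own toy illustration, not the thesis's hardware mechanism): track which processors have written each cache line, treat lines with multiple writers as "communicating", and prefetch neighbouring lines only for private data, which avoids aggravating false sharing.

```python
# Toy sketch: classify cache lines at run time and prefetch only private data.
from collections import defaultdict

class LineClassifier:
    def __init__(self):
        self.writers = defaultdict(set)   # line address -> set of writing CPUs

    def record_write(self, cpu, line):
        self.writers[line].add(cpu)

    def communicating(self, line):
        return len(self.writers[line]) > 1

    def lines_to_fetch(self, line, degree=2):
        """On a miss: fetch the line; prefetch neighbours only if private."""
        if self.communicating(line):
            return [line]
        return [line + i for i in range(degree)]

c = LineClassifier()
c.record_write(0, 100)                              # line 100: CPU 0 only
c.record_write(0, 200); c.record_write(1, 200)      # line 200: shared
print(c.lines_to_fetch(100))  # [100, 101]: private, safe to prefetch
print(c.lines_to_fetch(200))  # [200]: communicating, no prefetch
```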
25

Optimizing Sparse Matrix-Matrix Multiplication on a Heterogeneous CPU-GPU Platform

Wu, Xiaolong 16 December 2015 (has links)
Sparse Matrix-Matrix multiplication (SpMM) is a fundamental operation over irregular data, widely used in graph algorithms such as finding minimum spanning trees and shortest paths. In this work, we present a hybrid CPU- and GPU-based parallel SpMM algorithm to improve the performance of SpMM. First, we improve data locality through element-wise multiplication. Second, we exploit the ordered property of the row indices to partially sort, rather than fully sort, all triples according to row and column indices. Finally, through a hybrid CPU-GPU approach using a two-level pipelining technique, our algorithm is able to better exploit a heterogeneous system. Compared with the state-of-the-art SpMM methods in the cuSPARSE and CUSP libraries, our approach achieves average speedups of 1.6x and 2.9x, respectively, on nine representative matrices from the University of Florida Sparse Matrix Collection.
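The element-wise formulation can be sketched in a few lines (a sequential toy, my own assumptions rather than the paper's CUDA implementation): each nonzero A[i,k] is multiplied against row k of B, the resulting (row, col, value) triples are accumulated, and because A is walked in row-major order the triples emerge grouped by row, so only per-row ordering by column remains to be established.

```python
# Toy triple-based SpMM on COO-format inputs: lists of (row, col, val).
from collections import defaultdict

def spmm(a_triples, b_triples):
    """Return the sorted nonzero triples of the product A * B."""
    b_rows = defaultdict(list)
    for r, c, v in b_triples:
        b_rows[r].append((c, v))

    acc = defaultdict(float)              # (row, col) -> accumulated value
    for i, k, av in sorted(a_triples):    # walk A in row-major order
        for j, bv in b_rows[k]:
            acc[(i, j)] += av * bv        # element-wise multiply-accumulate
    return sorted((i, j, v) for (i, j), v in acc.items())

A = [(0, 0, 2.0), (1, 1, 3.0)]            # diag(2, 3)
B = [(0, 1, 4.0), (1, 0, 5.0)]
print(spmm(A, B))   # [(0, 1, 8.0), (1, 0, 15.0)]
```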
26

High performance Monte Carlo computation for finance risk data analysis

Zhao, Yu January 2013 (has links)
Finance risk management has been playing an increasingly important role in the finance sector, both to analyse finance data and to prevent potential crises. It is widely recognised that Value at Risk (VaR) is an effective method for finance risk management and evaluation. This thesis conducts a comprehensive review of a number of VaR methods and discusses their strengths and limitations in depth. Among these methods, Monte Carlo simulation has proven to be the most accurate for finance risk evaluation due to its strong modelling capabilities. However, one major challenge in Monte Carlo analysis is its high computational complexity of O(n²). To speed up the computation, this thesis parallelises Monte Carlo analysis using the MapReduce model, which has become a major software programming model in support of data-intensive applications. MapReduce consists of two functions: Map and Reduce. The Map function segments a large data set into small data chunks and distributes these chunks among a number of computers, with a Mapper processing one data chunk per computing node in parallel. The Reduce function collects the results generated by these Map nodes (Mappers) and generates an output. The parallel Monte Carlo approach is evaluated initially in a small-scale MapReduce experimental environment and subsequently in a large-scale simulation environment. Both experimental and simulation results show that the MapReduce-based parallel Monte Carlo is significantly faster than the sequential Monte Carlo while maintaining the same level of accuracy. In data-intensive applications, moving huge volumes of data among the computing nodes can incur high communication overhead. To address this issue, this thesis further considers data locality in the MapReduce-based parallel Monte Carlo and evaluates the impact of data locality on computational performance.
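The Map/Reduce decomposition described above can be sketched with a toy, single-process stand-in for a real MapReduce cluster (the return model and parameters are my own illustrative assumptions): each "mapper" runs an independent batch of Monte Carlo scenarios, and the "reducer" pools the simulated losses and reads off VaR as an empirical quantile.

```python
# Toy MapReduce-style Monte Carlo VaR: mappers simulate losses independently,
# the reducer pools them and takes the empirical quantile.
import random

def mapper(seed, n_scenarios, mu=0.0, sigma=0.02):
    """Simulate n_scenarios portfolio returns; return the list of losses."""
    rng = random.Random(seed)
    return [-rng.gauss(mu, sigma) for _ in range(n_scenarios)]

def reducer(loss_lists, confidence=0.99):
    """Pool all mapper outputs and return the empirical VaR."""
    losses = sorted(l for chunk in loss_lists for l in chunk)
    idx = int(confidence * len(losses))
    return losses[min(idx, len(losses) - 1)]

chunks = [mapper(seed, 10_000) for seed in range(4)]   # four "mappers"
var_99 = reducer(chunks)   # 99% VaR; for N(0, 0.02) roughly 2.33 * 0.02
print(round(var_99, 3))
```

Because each mapper only needs its seed and scenario count, the chunks are embarrassingly parallel; data locality matters when the scenarios instead consume large historical data sets that must be co-located with the mappers.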
27

Idiomatic Root Merge in Modern Hebrew blends

Pham, Mike January 2011 (has links)
In this paper I use the Distributed Morphology framework and the semantic Locality Constraints proposed by Arad (2003) to examine category assignments of blends in Modern Hebrew, as well as blends, compounds and idioms in English where relevant. Bat-El (1996) provides an explicit phonological analysis of Modern Hebrew blends and argues against any morphological process at play in blend formation. I argue, however, that blends and compounds must be accounted for within morphology because of their category assignments. I first demonstrate that blends are unquestionably formed by blending fully inflected words rather than roots, and then reject an analysis that accounts for weakened Locality Constraints by positing the formation of a new root. Instead, I propose a hypothesis of Idiomatic Root Merge, in which a root can be an n-place predicate that selects at least an XP sister and a category head. This proposal also entails that there is a structural difference between two surface-similar phrases with literal and idiomatic meanings respectively.
28

Uma arquitetura usando trackers hierárquicos para localidade em redes P2P gerenciadas. / An architecture for P2P locality in managed networks using hierarchical trackers.

Miers, Charles Christian 29 November 2012 (has links)
Peer-to-Peer (P2P) networks have become an attractive method for distributing multimedia content over the Internet in recent years. Several factors have contributed to this success, but the low distribution costs and the inherent scalability of P2P networks, in which content consumers are also potential sources, are among the most prominent. However, the effectiveness and performance of several popular P2P networks (such as those that use the BitTorrent protocol) depend considerably on how well the tracker (the element responsible for identifying and managing the participants in a BitTorrent P2P network) selects the peers that will provide the content. The peers chosen by the tracker directly affect the user's perception of the service's performance and the use of network resources. In addition, P2P networks usually have no notion of locality, resulting in a non-optimal utilisation of network resources. To address these issues, this thesis presents a novel hierarchical-tracker architecture providing P2P locality for managed networks, based on a modified version of the BitTorrent protocol. The thesis shows, through experimentation, that adopting the proposed architecture leads to significant improvements in network efficiency without compromising the end-user experience. The principal improvement is control over the data traffic exchanged between peers, achieved through the tracker's choice of peers based on network information and business rules. This enhancement allows the network to be managed proactively and to adapt dynamically to adverse conditions by means of configuration policies activated by predetermined triggers. The thesis uses time-based (date/time) triggers to exemplify this programmable policy-change approach.
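The locality-aware peer selection at the heart of such a tracker can be sketched as follows (the names and the prefix-matching policy are my own illustration, not the thesis's architecture): when answering an announce, prefer peers whose network prefix matches the requester's, falling back to remote peers only to fill the requested count.

```python
# Toy locality-aware peer selection for a tracker response.
def select_peers(requester_prefix, peers, want=3):
    """peers: list of (peer_id, prefix). Local peers first, then the rest."""
    local = [p for p, pre in peers if pre == requester_prefix]
    remote = [p for p, pre in peers if pre != requester_prefix]
    return (local + remote)[:want]

peers = [("a", "10.0"), ("b", "10.1"), ("c", "10.0"), ("d", "10.2")]
print(select_peers("10.0", peers))  # ['a', 'c', 'b']: local peers preferred
```

A real deployment would rank by richer network information (AS number, measured latency, business rules) and could swap the ranking policy on a trigger, which is the programmable-policy idea the abstract describes.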
30

A Mathematical Foundation For Locality

January 2014 (has links)
This work is motivated by two non-intuitive predictions of Quantum Mechanics: non-locality and contextuality. Non-locality is a phenomenon whereby interactions between spatially separated objects appear to occur faster than the speed of light. Contextuality is a phenomenon whereby the outcome of a measurement cannot be interpreted as the revelation of an intrinsic, fixed property of the system being measured, but instead necessarily depends on the configuration of the measurement apparatus. Quantum Mechanics predicts non-local behavior in certain types of experiments collectively known as Bell tests. However, ruling out all possible alternative local theories is a subtle and demanding task. In this work, we lay out a mathematically rigorous framework for analyzing Bell experiments. Using this framework, we derive the famous Clauser-Horne-Shimony-Holt (CHSH) inequality, an important constraint that is obeyed by all local theories and violated by Quantum Mechanics. We further demonstrate how to analyze the data of a CHSH experiment without assuming that successive experimental trials are independent and/or identically distributed. We also derive the Clauser-Horne (CH74) inequality, an inequality better suited to realistic Bell experiments using photons. We demonstrate a robust method for statistically analyzing the data of a CH74 experiment, and show how to calculate exact p-values for this analysis, improving on the previously best-known (loose) upper bounds obtained from Hoeffding-style inequalities. The work concludes with an exploration of contextuality: the Kochen-Specker theorem, a result demonstrating the contextual nature of Quantum Mechanics, is applied to resolve a conjecture in Domain Theory regarding the spectral order on quantum states.
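The gap between Hoeffding-style bounds and exact tail probabilities can be illustrated numerically (the trial counts below are my own illustrative numbers, not the thesis's experimental data): in the CHSH game a local theory wins each trial with probability at most 3/4, so under i.i.d. assumptions the p-value for k wins in n trials is bounded by exp(-2n(k/n - 3/4)^2), while the exact binomial tail is markedly smaller.

```python
# Compare a Hoeffding-style p-value bound against the exact binomial tail
# for an i.i.d. CHSH experiment with local win probability q = 3/4.
import math

def hoeffding_p(n, k, q=0.75):
    """Hoeffding upper bound on P(X >= k) for X ~ Bin(n, q)."""
    return math.exp(-2 * n * (k / n - q) ** 2)

def binom_tail_p(n, k, q=0.75):
    """Exact tail P(X >= k) for X ~ Bin(n, q)."""
    return sum(math.comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

n, k = 1000, 800            # 80% wins over 1000 trials
print(hoeffding_p(n, k))    # loose bound, exp(-5), about 6.7e-3
print(binom_tail_p(n, k))   # exact tail, roughly two orders of magnitude smaller
```

The thesis goes further by dropping the i.i.d. assumption entirely; this sketch only shows why replacing a Hoeffding bound with an exact computation already sharpens the conclusion.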
