141

A Survey of Scalable Real-Time Architectures for Data Acquisition Systems

DeBenedetto, Louis J. October 1999
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Today’s large-scale signal processing systems impose massive bandwidth requirements on both internal and external communication systems. Most often, these bandwidth requirements are met by scalable input/output architectures built around high-performance, standards-based technology. Several such technologies are available and are in common use as internal and/or external communication mechanisms. This paper provides an overview of some of the more common scalable technologies used for internal and external communications in real-time data acquisition systems. With respect to internal communication mechanisms, this paper focuses on three ANSI-standard switched fabric technologies: RACEway (ANSI/VITA 5-1994), SKYchannel (ANSI/VITA 10-1995) and Myrinet (ANSI/VITA 26-1998). The discussion then turns to how Fibre Channel, HiPPI, and ATM are used to provide scalable external communications in real-time systems. Finally, a glimpse of how these technologies are evolving to meet tomorrow’s requirements is provided.
142

Efficient approaches to simulating individual-based cell population models

Harvey, Daniel Gordon January 2013
Computational modelling of populations of cells has been applied to further understanding in a range of biological fields, from cell sorting to tumour development. The ability to analyse the emergent population-level effects of variation at the cellular and subcellular level makes it a powerful approach. As more detailed models have been proposed, the demand for computational power has increased. While developments in microchip technology continue to increase the power of individual compute units available to the research community, the use of parallel computing offers an immediate increase in available computing power. To make full use of parallel computing technology it is necessary to develop specialised algorithms. To that end, this thesis is concerned with the development, implementation and application of a novel parallel algorithm for the simulation of an off-lattice individual-based model of a population of cells. We first use the Message Passing Interface to develop a parallel algorithm for the overlapping spheres model which we implement in the Chaste software library. We draw on approaches for parallelising molecular dynamics simulations to develop a spatial decomposition approach to dividing data between processors. By using functions designed for saving and loading the state of simulations, our implementation allows for the parallel simulation of all subcellular models implemented in Chaste, as well as cell-cell interactions that depend on any of the cell state variables. Our implementation allows for faithful replication of model cells that migrate between processors during a simulation. We validate our parallel implementation by comparing results with the extensively tested serial implementation in Chaste. While the use of the Message Passing Interface means that our algorithm may be used on shared- and distributed-memory systems, we find that parallel performance is limited due to high communication costs. To address this we apply a series of optimisations that improve the scaling of our algorithm both in terms of compute time and memory consumption for given benchmark problems. To demonstrate an example application of our work to a biological problem, we extend our algorithm to enable parallel simulation of the Subcellular Element Model (S.A. Sandersius and T.J. Newman. Phys. Biol., 5:015002, 2008). By considering subcellular biomechanical heterogeneity we study the impact of a stiffer nuclear region within cells on the initiation of buckling of a compressed epithelial layer. The optimised parallel algorithm decreases computation time for a single simulation in this study by an order of magnitude, reducing computation time from over a week to a single day.
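
The spatial decomposition approach described above can be pictured with a small sketch. This is not Chaste code; it is a hypothetical example, built on mpi4py, of how cell positions might be divided into slabs along one axis, one slab per processor, with cells near a slab boundary exchanged as ghost (halo) cells with the neighbouring rank. All names and parameter values are illustrative.

```python
# Minimal sketch (not Chaste code): slab decomposition of cells across MPI ranks
# with a halo exchange of boundary cells, in the spirit of molecular-dynamics
# style spatial decomposition. Names like `interaction_radius` are illustrative.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

domain_width = 100.0          # total extent of the simulation domain in x
slab_width = domain_width / size
interaction_radius = 1.5      # cells closer than this interact

# Each rank owns the cells whose x coordinate falls inside its slab.
rng = np.random.default_rng(seed=rank)
local_cells = rng.uniform(
    low=[rank * slab_width, 0.0],
    high=[(rank + 1) * slab_width, 100.0],
    size=(50, 2),
)

def halo_to(boundary_x):
    """Cells within one interaction radius of a slab boundary."""
    return local_cells[np.abs(local_cells[:, 0] - boundary_x) < interaction_radius]

# Exchange halo cells with the left and right neighbours (non-periodic domain).
left, right = rank - 1, rank + 1
halos = []
if left >= 0:
    halos.append(comm.sendrecv(halo_to(rank * slab_width), dest=left, source=left))
if right < size:
    halos.append(comm.sendrecv(halo_to((rank + 1) * slab_width), dest=right, source=right))

ghost_cells = np.vstack(halos) if halos else np.empty((0, 2))
# Forces on locally owned cells can now be computed from local_cells + ghost_cells;
# cells that migrate out of the slab would then be handed to their new owner rank.
print(f"rank {rank}: {len(local_cells)} owned cells, {len(ghost_cells)} ghost cells")
```
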
143

Reliable peer to peer grid middleware

Leslie, Matthew John January 2011
Grid computing systems are suffering from reliability and scalability problems caused by their reliance on centralised middleware. In this thesis, we argue that peer to peer middleware could help alleviate these problems. We show that peer to peer techniques can be used to provide reliable storage systems, which can be used as the basis for peer to peer grid middleware. We examine and develop new methods of providing reliable peer to peer storage, giving a new algorithm for this purpose, and assessing its performance through a combination of analysis and simulation. We then give an architecture for a peer to peer grid information system based on this work. Performance evaluation of this information system shows that it improves scalability when compared to the original centralised system, and that it withstands the failure of participant nodes without a significant reduction in quality of service. New contributions include dynamic replication, a new method for maintaining reliable storage in a Distributed Hash Table, which we show allows for the creation of more reliable, higher performance systems with lower bandwidth usage than current techniques. A new analysis of the reliability of distributed storage systems is also presented, which shows for the first time that replica placement has a significant effect on reliability. A simulation of the performance of distributed storage systems provides for the first time a quantitative performance comparison between different placement patterns. Finally, we show how these reliable storage techniques can be applied to grid computing systems, giving a new architecture for a peer to peer grid information service for the SAM-Grid system. We present a thorough performance evaluation of a prototype implementation of this architecture. Many of these contributions have been published at peer reviewed conferences.
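
As a rough illustration of the kind of replica-placement question studied in this thesis (not the dynamic replication algorithm itself), the sketch below builds a toy consistent-hashing ring and places k replicas of each key on consecutive successor nodes, the baseline DHT scheme against which alternative placement patterns are compared. The class and function names are illustrative.

```python
# Toy consistent-hashing ring (Chord-style) with successor-list replica placement.
# Illustrative sketch only, not the dynamic replication algorithm from the thesis.
import hashlib
from bisect import bisect_right

def h(value: str) -> int:
    """Hash a string onto a 2^32 identifier circle."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2**32)

class Ring:
    def __init__(self, node_names, replication_factor=3):
        self.k = replication_factor
        # Sort nodes by their position on the identifier circle.
        self.nodes = sorted(node_names, key=h)
        self.positions = [h(n) for n in self.nodes]

    def replica_nodes(self, key: str):
        """Return the nodes holding replicas of `key`: the key's successor
        and the k-1 nodes that follow it around the ring."""
        start = bisect_right(self.positions, h(key)) % len(self.nodes)
        count = min(self.k, len(self.nodes))
        return [self.nodes[(start + i) % len(self.nodes)] for i in range(count)]

ring = Ring([f"node{i}" for i in range(8)], replication_factor=3)
for key in ["job-metadata-42", "dataset-location-7"]:
    print(key, "->", ring.replica_nodes(key))
# A key is lost only if all k replica holders fail, which is why the placement
# pattern (clustered successors vs. spread-out replicas) changes reliability.
```
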
144

Multimedia data dissemination in opportunistic systems / Diffusion multimédia de données dans des systèmes opportunistes

Klaghstan, Merza 01 December 2016
Opportunistic networks are human-centric mobile ad-hoc networks in which neither the topology nor the participating nodes are known in advance. Routing is dynamically planned following the store-carry-and-forward paradigm, which takes advantage of people's mobility. This widens the range of communication and supports indirect end-to-end data delivery, but due to individuals' mobility, OppNets are characterized by frequent communication disruptions and uncertain data delivery. Hence, these networks are mostly used for exchanging small messages such as disaster alarms or traffic notifications, and scenarios that require the exchange of larger data remain challenging. There are nevertheless multimedia sharing scenarios in which a user might need to switch to an ad-hoc alternative, for example 1) absence of infrastructural networks in remote rural areas, 2) high costs due to limited data volumes, or 3) undesirable censorship by third parties while exchanging sensitive content. Consequently, we target in this thesis a video dissemination scheme in OppNets. For the video delivery problem in sparse opportunistic networks, we propose a solution comprising three contributions. The first granulates the video at the source node into smaller parts and associates them with unequal redundancy degrees. This is technically based on Scalable Video Coding (SVC), which encodes a video into several layers of unequal importance, allowing the content to be viewed at different quality levels. Layers are routed using the Spray-and-Wait routing protocol, with different redundancy factors for the different layers depending on their importance. In this context, a video viewing QoE metric is also proposed, which takes the perceived video quality, the delivery delay and the network overhead into consideration, on a scalable basis. Second, we take advantage of the small Network Abstraction Layer (NAL) units that compose SVC layers. NAL units are packetized together under specific size constraints to optimize granularity, and packet sizes are tuned adaptively with regard to the dynamic network conditions: each node records a history of environmental information regarding contacts and forwarding opportunities, and uses this history to predict future opportunities and optimize the sizes accordingly. Lastly, the receiver node is pushed into action through a composite backward loss concealment mechanism. The receiver first requests the missing data from other nodes in the network in a request-response fashion; then, since the transmission concerns video content, video frame loss concealment techniques are also exploited at the receiver side. The two techniques are combined in a loss concealment mechanism that enables the receiver to react to missing data parts.
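
As a rough illustration of the first contribution (not the thesis implementation), the sketch below shows binary Spray-and-Wait bookkeeping in which each SVC layer is sprayed with its own copy budget, so the base layer receives more redundancy than the enhancement layers. The copy counts in INITIAL_COPIES and the class and function names are assumptions made for the example.

```python
# Illustrative sketch: binary Spray-and-Wait where each SVC layer carries its
# own copy budget, giving the base layer more redundancy than enhancement layers.
from dataclasses import dataclass

@dataclass
class Bundle:
    layer: int      # 0 = SVC base layer, 1+ = enhancement layers
    copies: int     # remaining copy budget carried by this node

# Assumed redundancy factors: more copies for more important layers.
INITIAL_COPIES = {0: 16, 1: 8, 2: 4}

def spray(bundle: Bundle):
    """Binary spray: on contact, hand half of the remaining copies to the
    encountered node and keep the other half. With one copy left, the node
    switches to the 'wait' phase and only delivers directly to the destination."""
    if bundle.copies <= 1:
        return None  # wait phase: no further relaying
    handed = Bundle(bundle.layer, bundle.copies // 2)
    bundle.copies -= handed.copies
    return handed

# The source creates one bundle per SVC layer with unequal copy budgets.
buffered = [Bundle(layer, copies) for layer, copies in INITIAL_COPIES.items()]
contact = spray(buffered[0])   # a contact occurs; the base layer is sprayed first
print(buffered[0], "->", contact)
```
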
145

Efficient NTRU Implementations

O'Rourke, Colleen Marie 30 April 2002
In this paper, new software and hardware designs for the NTRU Public Key Cryptosystem are proposed. The first design attempts to improve NTRU's polynomial multiplication by applying techniques from the Chinese Remainder Theorem (CRT) to the convolution algorithm. Although the application of CRT shows promise for the creation of the inverse polynomials in the setup procedure, it does not provide any benefits to the procedures that are critical to the performance of NTRU (public key creation, encryption, and decryption). This research identified that this is due to the small coefficients of one of the operands, a point that is commonly misunderstood. The second design focuses on improving the performance of the polynomial multiplications within NTRU's key creation, encryption, and decryption procedures through hardware. This design exploits the inherent parallelism within a polynomial multiplication to make scalability possible. The advantage of scalability is that it allows the user to customize the design for low- and high-power applications. In addition, the support for arbitrary precision allows the user to meet the desired security level. The third design utilizes the Montgomery Multiplication algorithm to develop a unified architecture that can perform a modular multiplication for GF(p) and GF(2^k) and a polynomial multiplication for NTRU. The unified design requires only an additional 10 gates for the Montgomery Multiplier core to compute the polynomial multiplication for NTRU. However, this added support for NTRU places some restrictions on the supported lengths of the moduli and on the chosen value of the residue for the GF(p) and GF(2^k) cases. Despite these restrictions, the unified architecture is capable of supporting public-key operations for the majority of public-key cryptosystems.
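
The convolution (star) product that dominates NTRU's key creation, encryption and decryption can be sketched in a few lines. The parameters N and q below are toy values rather than a standardized NTRU parameter set, and the polynomials f and g are illustrative.

```python
# Illustrative sketch of the core NTRU operation discussed above: cyclic
# (convolution) multiplication of polynomials in Z_q[x]/(x^N - 1).
# Toy parameters, not a recommended NTRU parameter set.
N, q = 11, 32

def star_multiply(a, b, modulus=q):
    """Star product: c_k = sum over i+j = k (mod N) of a_i * b_j, reduced mod q."""
    c = [0] * N
    for i in range(N):
        if a[i] == 0:        # ternary operands are sparse, so skip zero coefficients
            continue
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % modulus
    return c

# One operand has small (ternary) coefficients, as in NTRU key/message polynomials;
# this is why big-coefficient tricks such as CRT buy little in these procedures.
f = [1, -1, 0, 1, 0, 0, -1, 0, 1, 0, -1]
g = [3, 14, 7, 0, 25, 9, 1, 30, 2, 18, 6]
print(star_multiply(f, g))
```
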
146

Logical Superposition Coded Modulation for Wireless Video Multicasting

Ho, James Ching-Chih January 2009
This thesis documents the design of logical superposition coded (SPC) modulation for wireless video multicast systems, to tackle the issues caused by multi-user channel diversity, one of the long-standing problems inherent to wireless video multicasting. The framework generates a logical SPC modulated signal by mapping successively refinable information bits into a single signal constellation, with modifications only in the MAC-layer software. The transmitted logical SPC signals not only mimic the SPC signals generated by superposing multiple modulated signals in conventional hardware-based SPC modulation, but also yield comparable performance gains when provided with knowledge of the information bit dependencies and the receiver channel distributions. At the receiving end, the proposed approach requires only simple modifications in the MAC-layer software and retains full decoding compatibility with the conventional multi-stage successive interference cancellation (SIC) approach, which involves additional hardware devices. Generalized formulations for the symbol error rate (SER) are derived for performance evaluation and comparison with the conventional hardware-based approach.
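
As background for the superposition idea that the logical SPC scheme mimics in software, the sketch below shows conventional power-domain superposition of a base-layer QPSK symbol and an enhancement-layer QPSK symbol into a single constellation point. It is an illustrative sketch, not the thesis's MAC-layer bit mapping; the power-split parameter alpha and the function names are assumptions.

```python
# Hedged sketch of the superposition principle behind SPC modulation: a
# high-priority (base) QPSK symbol and a low-priority (enhancement) QPSK symbol
# are combined with unequal power into one transmitted constellation point.
import numpy as np

def qpsk(bits):
    """Map two bits to a unit-energy QPSK symbol (Gray mapping)."""
    return ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)

def superpose(base_bits, enh_bits, alpha=0.8):
    """Allocate a fraction `alpha` of the power to the base layer and the
    remainder to the enhancement layer, then add the scaled symbols."""
    return np.sqrt(alpha) * qpsk(base_bits) + np.sqrt(1 - alpha) * qpsk(enh_bits)

# A receiver with a poor channel decodes only the strong base layer; a good
# receiver decodes the base layer, cancels it, then decodes the enhancement layer.
x = superpose(np.array([0, 1]), np.array([1, 1]))
print(x)
```
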
147

Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms

January 2012
Compressive sensing (CS) is a novel sampling methodology representing a paradigm shift from conventional data acquisition schemes. The theory of compressive sensing ensures that, under suitable conditions, compressible signals or images can be reconstructed from far fewer samples or measurements than are required by the Nyquist rate. So far in the literature, most works on CS concentrate on one-dimensional or two-dimensional data. However, besides involving far more data, three-dimensional (3D) data processing has particularities that require the development of new techniques in order to make successful transitions from theoretical feasibility to practical capacity. This thesis studies several issues arising from the application of the CS methodology to 3D image processing tasks. Two specific applications are hyperspectral imaging and video compression, where 3D images are either directly unmixed or recovered as a whole from CS samples. The main issues include CS decoding models, preprocessing techniques and reconstruction algorithms, as well as CS encoding matrices in the case of video compression. Our investigation involves three major parts. (1) Total variation (TV) regularization plays a central role in the decoding models studied in this thesis. To solve such models, we propose an efficient scheme to implement the classic augmented Lagrangian multiplier method and study its convergence properties. The resulting Matlab package TVAL3 is used to solve several models. Computational results show that, thanks to its low per-iteration complexity, the proposed algorithm is capable of handling realistic 3D image processing tasks. (2) Hyperspectral image processing typically demands heavy computational resources due to the enormous amount of data involved. We investigate low-complexity procedures to unmix, sometimes blindly, CS-compressed hyperspectral data to directly obtain material signatures and their abundance fractions, bypassing the high-complexity task of reconstructing the image cube itself. (3) To overcome the "cliff effect" suffered by current video coding schemes, we explore a compressive video sampling framework to improve scalability with respect to channel capacities. We propose and study a novel multi-resolution CS encoding matrix, and a decoding model with a TV-DCT regularization function. Extensive numerical results are presented, obtained from experiments that use not only synthetic data, but also real data measured by hardware. The results establish the feasibility and robustness, to varying extents, of the proposed 3D data processing schemes, models and algorithms. There remain many challenges to be resolved in each area, but hopefully the progress made in this thesis will represent a useful first step towards meeting these challenges in the future.
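
The TV-regularized decoding model at the heart of part (1) can be illustrated with a toy reconstruction. This is not the TVAL3 algorithm (which uses an augmented Lagrangian scheme); it is a hypothetical sketch that recovers a 1D piecewise-constant signal from random Gaussian measurements by plain gradient descent on a smoothed total-variation objective. All parameter values (lam, eps, step) are illustrative choices.

```python
# Toy sketch of a TV-regularized CS decoding model (not the TVAL3 algorithm):
# minimize  0.5*||A u - b||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps^2)
# by gradient descent, recovering a 1D piecewise-constant signal.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 80
u_true = np.zeros(n)
u_true[40:90], u_true[120:160] = 1.0, -0.5       # piecewise-constant ground truth

A = rng.standard_normal((m, n)) / np.sqrt(m)     # CS measurement matrix
b = A @ u_true                                    # compressive measurements

lam, eps, step = 0.02, 1e-2, 0.05                 # illustrative tuning values
u = np.zeros(n)
for _ in range(5000):
    diff = np.diff(u)                             # forward differences D u
    w = diff / np.sqrt(diff**2 + eps**2)          # gradient of the smoothed |D u|
    tv_grad = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))  # D^T w
    grad = A.T @ (A @ u - b) + lam * tv_grad
    u -= step * grad

print("relative error:", np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
```
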
149

Robust Transmission of 3D Models

Bici, Mehmet Oguz 01 November 2010
In this thesis, robust transmission of 3D models represented by static or time-consistent animated meshes is studied from the perspectives of scalable coding, multiple description coding (MDC) and error-resilient coding. First, three methods for MDC of static meshes are proposed, based respectively on multiple description scalar quantization, partitioning of wavelet trees, and optimal protection of the scalable bitstream by forward error correction (FEC). For each method, optimizations and tools to decrease complexity are presented. The FEC-based MDC method is also extended to packet-loss-resilient transmission, followed by an in-depth performance comparison with state-of-the-art techniques, which showed significant improvements. Next, three methods for MDC of animated meshes are proposed, based on layer duplication and on partitioning the set of vertices of a scalable coded animated mesh by spatial or temporal subsampling, where each set is encoded separately to generate independently decodable bitstreams. The proposed MDC methods can achieve varying redundancy allocations by including a number of encoded spatial or temporal layers from the other description. The algorithms are evaluated with redundancy-rate-distortion curves and per-frame reconstruction analysis. Then, for layered predictive compression of animated meshes, three novel prediction structures are proposed and integrated into a state-of-the-art layered predictive coder. The proposed structures are based on weighted spatial/temporal prediction and on angular relations of triangles between the current and previous frames. The experimental results show that, compared to the state-of-the-art scalable predictive coder, up to 30% bitrate reductions can be achieved with the combination of the proposed prediction schemes, depending on the content and quantization level. Finally, optimal quality scalability support is proposed for the state-of-the-art scalable predictive animated mesh coding structure, which previously supported only resolution scalability. Two methods based on arranging the bitplane order with respect to encoding or decoding order are proposed, together with a novel trellis-based optimization framework. Possible simplifications are provided to achieve a tradeoff between compression performance and complexity. Experimental results show that the optimization framework achieves quality scalability with significantly better compression performance than the state of the art without optimization.
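
One of the simplest instances of the vertex/frame partitioning idea, offered here only as an illustrative sketch rather than the thesis's codec, is temporal-subsampling MDC: even-numbered frames form one description and odd-numbered frames the other, and a lost description is concealed by interpolating the surviving frames. The function names and the synthetic animation are assumptions.

```python
# Illustrative sketch (not the thesis codec): temporal-subsampling MDC for an
# animated mesh. Even frames go to description 0, odd frames to description 1;
# if one description is lost, its frames are approximated by interpolation.
import numpy as np

def split_descriptions(frames):
    """frames: array of shape (num_frames, num_vertices, 3)."""
    return frames[0::2], frames[1::2]

def conceal_from_even(even_frames, num_frames):
    """Rebuild the full sequence when only the even description arrives,
    filling odd frames with the average of their temporal neighbours."""
    v = even_frames.shape[1]
    out = np.zeros((num_frames, v, 3))
    out[0::2] = even_frames
    for t in range(1, num_frames, 2):
        nxt = out[t + 1] if t + 1 < num_frames else out[t - 1]
        out[t] = 0.5 * (out[t - 1] + nxt)
    return out

# Synthetic animation: a random walk of 100 vertices over 20 frames.
frames = np.cumsum(np.random.default_rng(1).standard_normal((20, 100, 3)), axis=0)
d0, d1 = split_descriptions(frames)
recon = conceal_from_even(d0, len(frames))
print("per-frame RMS error:", np.sqrt(((recon - frames) ** 2).mean(axis=(1, 2))).round(3))
```
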
150

Multi-tree Monte Carlo methods for fast, scalable machine learning

Holmes, Michael P. 09 January 2009
As modern applications of machine learning and data mining are forced to deal with ever more massive quantities of data, practitioners quickly run into difficulty with the scalability of even the most basic and fundamental methods. We propose to provide scalability through a marriage between classical, empirical-style Monte Carlo approximation and deterministic multi-tree techniques. This union entails a critical compromise: losing determinism in order to gain speed. In the face of large-scale data, such a compromise is arguably often not only the right but the only choice. We refer to this new approximation methodology as Multi-Tree Monte Carlo. In particular, we have developed the following fast approximation methods: 1. Fast training for kernel conditional density estimation, showing speedups as high as 10⁵ on up to 1 million points. 2. Fast training for general kernel estimators (kernel density estimation, kernel regression, etc.), showing speedups as high as 10⁶ on tens of millions of points. 3. Fast singular value decomposition, showing speedups as high as 10⁵ on matrices containing billions of entries. The level of acceleration we have shown represents improvement over the prior state of the art by several orders of magnitude. Such improvement entails a qualitative shift, a commoditization, that opens doors to new applications and methods that were previously invisible, outside the realm of practicality. Further, we show how these particular approximation methods can be unified in a Multi-Tree Monte Carlo meta-algorithm which lends itself as scaffolding to the further development of new fast approximation methods. Thus, our contribution includes not just the particular algorithms we have derived but also the Multi-Tree Monte Carlo methodological framework, which we hope will lead to many more fast algorithms that can provide the kind of scalability we have shown here to other important methods from machine learning and related fields.
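
The empirical-style Monte Carlo half of the approach, stripped of the multi-tree machinery, can be sketched as follows: a kernel density summation over a large reference set is estimated by sampling reference points and growing the sample until an estimated relative error falls below a target. The function name mc_kde, the batch size and the normal-theory 1.96 confidence factor are illustrative assumptions.

```python
# Sketch of empirical-style Monte Carlo approximation (without the multi-tree
# machinery): estimate a kernel density summation by sampling reference points
# until the estimated relative error of the sample mean falls below a target.
import numpy as np

def mc_kde(query, references, bandwidth, rel_tol=0.05, batch=200, max_samples=20000):
    """Monte Carlo estimate of (1/N) * sum_j K((query - x_j)/h), Gaussian kernel."""
    rng = np.random.default_rng(0)
    samples = np.empty(0)
    while len(samples) < max_samples:
        idx = rng.integers(0, len(references), size=batch)     # sample reference points
        diffs = (query - references[idx]) / bandwidth
        samples = np.concatenate([samples, np.exp(-0.5 * np.sum(diffs**2, axis=1))])
        mean = samples.mean()
        half_width = 1.96 * samples.std(ddof=1) / np.sqrt(len(samples))  # normal CI
        if mean > 0 and half_width / mean < rel_tol:
            break
    return mean

rng = np.random.default_rng(42)
refs = rng.standard_normal((1_000_000, 3))        # one million reference points
q = np.zeros(3)
print("MC estimate:", mc_kde(q, refs, bandwidth=0.5))
```
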
