71.
Network coding for WDM all-optical networks / Manley, Eric D. January 2009.
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2009. / Title from title screen (site viewed October 15, 2009). PDF text: xx, 160 p. : ill. (some col.) ; 1 Mb. UMI publication number: AAT 3360160. Includes bibliographical references. Also available in microfilm and microfiche formats.
72.
Superposition coded modulation / Tong, Jun. January 2009 (PDF).
Thesis (Ph.D.)--City University of Hong Kong, 2009. / "Submitted to Department of Electronic Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [142]-152)
73.
Non-uniform filter banks and context modeling for image coding / Ho, Man-wing. January 2001.
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references (leaves 90-94).
74.
Non-uniform filter banks and context modeling for image coding / 何文泳, Ho, Man-wing. January 2001.
Published or final version / Computer Science and Information Systems / Master of Philosophy
75.
Reaction time for numerical coding and naming of numerals / Windes, James Dudley, 1937-. January 1966.
No description available.
76.
Investigating the combined appearance model for statistical modelling of facial images / Allen, Nicholas Peter Legh. January 2007.
The combined appearance model is a linear, parameterized and flexible model which has emerged as a powerful tool for representing, interpreting, and synthesizing the complex, non-rigid structure of the human face. The inherent strength of this model arises from the utilization of a representative training set which provides a priori knowledge of the allowable appearance variation of the face. The model was introduced by Edwards et al. in 1998 as part of the Active Appearance Model framework, a template alignment algorithm which used the model to automatically locate deformable objects within images. Since this debut, the model has been utilized within a plethora of applications relating to facial image processing. In essence, the appearance model combines individual statistical models of shape and texture variation in order to produce a single model of the correlations between shape and texture. In the context of facial modelling, this approach produces a model which is flexible in that it can accommodate the range of variation found in the face, specific in that it is restricted to only facial instances, and compact in that a new facial instance may be synthesized using a small set of parameters. It is additionally this compactness which makes it a candidate for model-based video coding. Methods used in the past to model faces are reviewed and the capabilities of the statistical model in general are investigated. Various approaches to building the intermediate linear Point Distribution Models (PDMs) and grey-level models are outlined and an approach decided upon for implementation. The respective statistical models for the Informatics and Modelling (IMM) and Extended Multi-Modal Verification for Teleservices and Securities (XM2VTS) facial databases are built using MATLAB in an approach incorporating Procrustes Analysis, Affine Transform Warping and Principal Components Analysis. The MATLAB implementation's integrity was validated against a similar approach encountered in the literature and found to produce results within 0.59%, 0.69% and 0.69% of those published for the shape, texture and combined models respectively. The models are consequently assessed with regard to their flexibility, specificity and compactness. The results demonstrate the model's ability to be successfully constrained to the synthesis of "legal" faces, to successfully parameterize and re-synthesize new unseen images from outside the training sets, and to significantly reduce the high dimensionality of input facial images to produce a powerful, compact model. / Thesis (M.Sc.Eng.)--University of KwaZulu-Natal, Durban, 2007.
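As an illustration of the modelling pipeline this abstract describes, a minimal sketch follows. It is not the author's MATLAB implementation: Python/numpy is used instead, and the training arrays shapes and textures (one row per face, assumed already Procrustes-aligned and warped to the mean shape respectively) are hypothetical placeholders.

    import numpy as np

    def pca(data, var_kept=0.98):
        # PCA via SVD: returns the mean, the principal axes (rows), and the
        # per-sample model parameters b = P^T (x - mean).
        mean = data.mean(axis=0)
        centered = data - mean
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        var = s ** 2 / centered.shape[0]
        k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
        axes = vt[:k]
        return mean, axes, centered @ axes.T

    # Independent linear models of shape and texture. The arrays shapes and
    # textures are assumed inputs: one row per training face, shapes
    # Procrustes-aligned, textures warped to the mean shape.
    s_mean, s_axes, b_s = pca(shapes)
    g_mean, g_axes, b_g = pca(textures)

    # Weight the shape parameters so shape and texture variances are
    # commensurate, concatenate, and apply PCA a third time to capture the
    # correlations between shape and texture.
    w = np.sqrt(b_g.var() / b_s.var())
    c_mean, c_axes, c = pca(np.hstack([w * b_s, b_g]))

    # Synthesis: a single small parameter vector generates a full face.
    b_new = c_mean + c[0] @ c_axes                          # re-synthesise face 0
    x_new = s_mean + (b_new[:b_s.shape[1]] / w) @ s_axes    # shape
    g_new = g_mean + b_new[b_s.shape[1]:] @ g_axes          # texture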
77.
On sets of odd type and caps in Galois geometries of order four / Packer, S. January 1995.
No description available.
78.
Methodologies and tools for computation offloading on heterogeneous multicores / Bhagwat, Ashwini. 18 May 2009.
Frequency scaling in traditional computing systems has hit the power wall, and multicore computing is here to stay. Unlike homogeneous multicores, which have a uniform architecture and instruction set across cores, heterogeneous multicores have differentially capable cores to provide optimal performance for specialized functionality. However, this heterogeneity also translates into difficult programming models, and extracting its potential is not trivial. The Cell Broadband Engine by the Sony-Toshiba-IBM (STI) consortium was amongst the first heterogeneous multicore systems, with a single Power Processing Unit (PPU) and eight Synergistic Processor Units (SPUs).
We address the issue of porting an existing sequential C/C++ codebase onto the Cell BE through compiler-driven program analysis and profiling. Until parallel programming models evolve, the "interim" solution to performance involves speeding up legacy code by offloading computationally intensive parts of a sequential thread to the co-processor, thus using it as an accelerator. The unique architectural characteristics of an accelerator make this problem quite challenging. On the Cell, these characteristics include the limited local store of the SPU, the high latency of data transfer between PPU and SPU, the lack of a branch prediction unit, limited SIMDizability, expensive scalar code, etc. In particular, the designers of the Cell have opted for software-controlled memory on its SPUs to reduce power consumption and to give programmers more control over the predictability of latency. The lack of a hardware cache on the SPU can create performance bottlenecks, because any data that needs to be brought into the SPU must be brought in using a DMA call. The need for supporting a software-controlled cache is thus evident for irregular memory accesses on the SPU. For such a cache to result in improved performance, the amount of time spent in book-keeping and tracking at run time should be minimal. Traditional algorithms like LRU, when implemented in software, incur overheads on every cache hit because appropriate data structures need to be updated. Such overheads are off the critical path for a traditional hardware cache but on the critical path for a software-controlled cache. Thus there is a need for better management of "data movement" for the code that is offloaded onto the SPU.
This thesis addresses the "code partitioning" problem as well as the "data movement" problem. We present:
GLIMPSES - a compiler-driven profiling tool that analyzes existing C/C++ code for its suitability for porting to the Cell and presents its results in an interactive visualizer.
Software-controlled cache - an improved eviction policy that exploits information gleaned from memory traces generated through offline profiling. The trace is analyzed to provide guidance for a run-time state machine within the cache manager, resulting in reduced run-time overhead and better performance. The design tradeoffs and the pros and cons of this approach are discussed as well. It is shown that with the right amount of run-time book-keeping and decision making, one can strike the balance needed to achieve high performance.
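As an illustration of the kind of trace-guided eviction described above, the sketch below (Python; all names hypothetical) uses a Belady-style farthest-next-use victim choice as a stand-in for the thesis's run-time state machine, so that cache hits perform no replacement book-keeping:

    import bisect
    from collections import defaultdict

    class TraceGuidedCache:
        # Sketch of a software-controlled cache whose victim selection is
        # guided by an offline memory trace (hypothetical names throughout).
        def __init__(self, trace, num_lines):
            self.num_lines = num_lines
            self.store = {}                 # address -> cached data
            self.clock = 0                  # position in the access stream
            self.hits = self.misses = 0
            # Offline pass: for every address, the sorted trace positions at
            # which it is touched -- the "hint table" produced by profiling.
            self.positions = defaultdict(list)
            for i, addr in enumerate(trace):
                self.positions[addr].append(i)

        def _next_use(self, addr):
            # Next profiled access to addr at or after the current clock.
            pos = self.positions.get(addr, [])
            i = bisect.bisect_left(pos, self.clock)
            return pos[i] if i < len(pos) else float('inf')

        def access(self, addr, fetch):
            # fetch(addr) stands in for the DMA transfer issued on a miss.
            if addr in self.store:
                self.hits += 1              # a hit updates no LRU metadata
            else:
                self.misses += 1
                if len(self.store) >= self.num_lines:
                    # Belady-style choice: evict the resident line whose
                    # next profiled use lies farthest in the future.
                    victim = max(self.store, key=self._next_use)
                    del self.store[victim]
                self.store[addr] = fetch(addr)
            self.clock += 1
            return self.store[addr]

    trace = [0, 1, 2, 0, 3, 0, 4, 1, 2]            # profiled address stream
    cache = TraceGuidedCache(trace, num_lines=2)
    for addr in trace:
        cache.access(addr, fetch=lambda a: a)      # identity stands in for DMA
    print(cache.hits, cache.misses)                # 2 hits, 7 misses here

The point of the sketch is that the cost of the hint table is paid offline and at miss time, never on the hit path.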
79.
Reliable communication for the noncoherent additive white Gaussian channel / Alles, Martin C. A. January 1990.
Typescript. / Thesis (Ph. D.)--University of Hawaii at Manoa, 1990. / Includes bibliographical references (leaves 214-215). / Microfiche. / xv, 215 leaves, bound : ill. ; 29 cm.
80.
Turbo and LDPC coding for the AWGN and space-time channel / Guidi, Andrew Mark. Date unknown.
The main focus of this thesis is the investigation of a number of space-time coding scenarios, based predominantly on the application of turbo codes and low-density parity-check (LDPC) codes to a multi-antenna system. Both code structures make use of the BPSK stacking construction, which readily applies binary linear codes to the space-time channel while also providing a check on the suitability of the resulting code for achieving maximum diversity advantage. The turbo-like codes investigated are based upon the application of a parallel concatenated scheme which directly maps the data and parity bits generated by the encoder to one of three possible antenna outputs. It is further highlighted that the interleaver plays a crucial role in determining overall performance, as it determines whether the resulting space-time codes achieve maximum diversity advantage. Performance results are presented for a number of different constituent codes and interleaver designs. The LDPC space-time codes considered herein are again based on satisfying the BPSK stacking construction to ensure that full diversity advantage is achieved. The code design is based on a recursive application of the Schur complement to devise block-based codes whose parity check matrix is relatively sparse. A number of code constructions that satisfy the conditions are then simulated to determine performance in both slow and fast fading channel conditions. This thesis also investigates the use of certain non-linear codes termed "chaotic codes" and their application as constituent codes within a parallel concatenated (turbo-like) coding scheme. The performance of such codes is shown to be readily analysed via the use of extrinsic information transfer (EXIT) techniques. The modified codes are simulated over an AWGN channel using BPSK modulation for a number of different block lengths. / Thesis (PhD Telecommunications)--University of South Australia, 2006.
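As an illustration of the BPSK stacking construction that both code families above rely on, the following sketch (Python; not from the thesis) checks the stacking criterion of Hammons and El Gamal: with binary matrices M_1, ..., M_L mapping an information word x to the L antenna streams x M_i, full spatial diversity L is achieved if and only if every nonzero GF(2) combination of the M_i has full rank:

    import itertools
    import numpy as np

    def gf2_rank(mat):
        # Rank of a 0/1 matrix over GF(2), via Gaussian elimination on
        # integer bitmasks (one mask per row).
        pivots = {}                              # leading-bit -> reduced row
        for row in mat:
            r = int(''.join(map(str, row)), 2)
            while r:
                b = r.bit_length() - 1           # column of the leading 1
                if b in pivots:
                    r ^= pivots[b]               # eliminate the leading 1
                else:
                    pivots[b] = r
                    break
        return len(pivots)

    def full_diversity(Ms):
        # Stacking criterion: the space-time code whose codeword rows are
        # x*M_1, ..., x*M_L (BPSK-mapped as bit -> (-1)**bit) achieves full
        # spatial diversity L iff every nonzero XOR combination of the M_i
        # has full rank k over GF(2).
        k = Ms[0].shape[0]
        for coeffs in itertools.product([0, 1], repeat=len(Ms)):
            if not any(coeffs):
                continue
            comb = np.bitwise_xor.reduce([M for M, a in zip(Ms, coeffs) if a])
            if gf2_rank(comb) < k:
                return False
        return True

    M1 = np.array([[1, 0], [0, 1]])              # two antennas, k = n = 2
    M2 = np.array([[0, 1], [1, 1]])
    print(full_diversity([M1, M2]))              # True: full diversity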