661 |
Low-Density Parity-Check Codes with Erasures and Puncturing. Ha, Jeongseok. 01 December 2003.
In this thesis, we extend applications of Low-Density Parity-Check (LDPC) codes to a combination of constituent sub-channels: a mixture of Gaussian channels with erasures. This model represents, for example, a common channel in magnetic recording, where thermal asperities in the system are detected and presented to the decoder as erasures. Although this channel is practically important, we are not aware of prior work evaluating the performance of LDPC codes over it. We are also interested in practical issues such as designing robust LDPC codes for the mixture channel and predicting performance variations due to erasure patterns (random and burst) and finite block lengths.
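To make the channel model concrete, here is a minimal simulation sketch (illustrative only, not code from the thesis; parameter names and the SNR convention are assumptions): BPSK symbols pass through an AWGN channel, a random fraction of symbols is erased, and erasures reach the decoder as zero log-likelihood ratios.

```python
import numpy as np

def mixed_awgn_erasure_channel(bits, snr_db, erasure_prob, rng=None):
    """Transmit BPSK bits over AWGN, then erase a random subset of symbols.

    Returns per-bit log-likelihood ratios (LLRs); erased positions get LLR = 0,
    which is how an LDPC decoder typically represents 'no channel information'.
    """
    rng = np.random.default_rng() if rng is None else rng
    symbols = 1.0 - 2.0 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
    sigma2 = 10 ** (-snr_db / 10.0)                 # noise variance for unit-energy symbols
    received = symbols + rng.normal(scale=np.sqrt(sigma2), size=bits.shape)
    llr = 2.0 * received / sigma2                   # standard AWGN channel LLRs
    erased = rng.random(bits.shape) < erasure_prob  # thermal-asperity-like erasures
    llr[erased] = 0.0                               # erasure = complete uncertainty
    return llr, erased

# Example: 10,000 random bits, 2 dB SNR, 5% erasures
bits = np.random.default_rng(0).integers(0, 2, 10_000).astype(float)
llr, erased = mixed_awgn_erasure_channel(bits, snr_db=2.0, erasure_prob=0.05)
```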
On time-varying channels, a common error-control strategy is to adapt the coding rate according to the available channel state information (CSI). An effective way to realize this strategy is to use a single code and puncture it in a rate-compatible fashion, yielding a so-called rate-compatible punctured code (RCPC). We are interested in the existence of good puncturing patterns for rate changes that minimize performance loss. We show the existence of good puncturing patterns analytically and verify the results with simulations.
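As a sketch of the rate-compatible idea (illustrative only; these are not the puncturing patterns analyzed in the thesis, and the code parameters are invented), the patterns for successive rates can be nested so that every bit punctured at a lower rate is also punctured at every higher rate, letting one mother code serve a whole rate family:

```python
import numpy as np

def nested_puncturing_patterns(n, k, num_steps, rng=None):
    """Return nested sets of punctured positions (parity bits only).

    Pattern i+1 is a superset of pattern i, so a single mother code supports a
    family of rate-compatible punctured codes.
    """
    rng = np.random.default_rng() if rng is None else rng
    parity_positions = rng.permutation(np.arange(k, n))  # puncture only parity bits
    step = len(parity_positions) // num_steps
    return [set(parity_positions[: (i + 1) * step]) for i in range(num_steps)]

def puncture(codeword, punctured_positions):
    """Drop punctured positions from a codeword before transmission."""
    keep = [i for i in range(len(codeword)) if i not in punctured_positions]
    return codeword[keep]

# Mother code of rate 1/2 (n=64, k=32); three nested patterns raise the rate stepwise.
n, k = 64, 32
patterns = nested_puncturing_patterns(n, k, num_steps=3, rng=np.random.default_rng(1))
for p in patterns:
    print(f"rate = {k}/{n - len(p)} = {k / (n - len(p)):.2f}")

codeword = np.zeros(n, dtype=int)             # stand-in for an encoded LDPC codeword
print(len(puncture(codeword, patterns[-1])))  # bits actually transmitted at the highest rate
```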
Universality of a channel code across a broad range of coding rates is a theoretically interesting topic. We are interested in the possibility of using the puncturing technique proposed in this thesis to design universal LDPC codes. We also consider how to design high-rate LDPC codes by puncturing low-rate LDPC codes. The new design method can take advantage of the longer effective block lengths, sparser parity-check matrices, and larger minimum distances of low-rate LDPC codes.
|
662 |
Revising Selected Written Patient Education Materials Through Readability and Concreteness. Goolsby, Rhonda Denise. August 2010.
Much of the current research on written patient education materials (WPEM) suggests that they are written in a manner that is too difficult even for educated patients to understand and remember. Much of the research in this area focuses on modifying the readability of WPEM, which has been shown to be relatively ineffective. In this study, an attempt was made to determine whether a theory-based method of revising WPEM for improved comprehensibility and memorability was effective.
The effectiveness of three versions of WPEM regarding breast self-exams (BSEs) was examined: the original version, without illustrations, obtained from the American Cancer Society website; a version rewritten at a lower readability level as measured by the Flesch-Kincaid readability formula; and a version with both a lower Flesch-Kincaid readability level and increased use of concrete language as suggested by Dual Coding Theory. The researcher compared the percentage of idea units recalled by 76 participants at two time points: immediately after reading the randomly assigned version of WPEM and seven days after the initial reading.
The WPEM version that combined the lower readability level and concrete language was the most recalled by participants at both immediate and delayed recall. In fact, the delayed recall of this version after the seven-day period was almost equivalent to the immediate recall of the participants in the other two groups. A significant main effect was found for the form of WPEM, F(2, 73) = 27.69, p < .001, ηp² = .43, with an observed power of 1.00. A significant main effect was found for time, F(1, 73) = 161.94, p < .001, ηp² = .69, with an observed power of 1.00. A significant interaction of WPEM form and time was found, F(2, 73) = 5.07, p = .01, ηp² = .12, with an observed power of .80.
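For readers reconstructing the effect sizes, partial eta squared follows from the reported F statistics and degrees of freedom via a standard identity (not taken from the thesis itself); for the main effect of WPEM form, for example:

```latex
\eta_p^2 \;=\; \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}}
        \;=\; \frac{27.69 \times 2}{27.69 \times 2 + 73} \;\approx\; .43
```

The same identity reproduces the reported values of .69 and .12 for the time and interaction effects.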
Reported frequency of performing BSEs and levels of confidence in performing BSEs were also analyzed using the Wilcoxon signed-ranks test for the three WPEM versions over time. Reported frequency did not differ significantly after seven days, regardless of the form of WPEM (WPEM A, p = .32; WPEM B, p = 1.00; WPEM C, p = .74). Levels of confidence were significantly greater after seven days, regardless of the form of WPEM (WPEM A, p = .02; WPEM B, p < .001; WPEM C, p < .001).
Overall, the results indicate that combining reduced readability and increased concrete language is beneficial. The writing of WPEM in a way that patients can understand should be supported by theory, and infusing Dual Coding Theory into the writing of selected WPEM may be beneficial for patients.
|
663 |
Error concealment for H.264 video transmission. Mazataud, Camille. 08 July 2009.
Video coding standards such as H.264 AVC (Advanced Video Coding) rely on predictive coding to achieve high compression efficiency. Predictive coding consists of predicting each frame from preceding frames. However, it incurs a cost when video is transmitted over unreliable networks: frames are no longer independent, and the loss of data in one frame may affect future frames. In this thesis, we study the effectiveness of Flexible Macroblock Ordering (FMO) in mitigating the effect of errors on the decoded video and propose solutions to improve error concealment in H.264 decoders.
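To illustrate why a single loss propagates (a toy differential-decoder sketch, not H.264 itself; all names and values are invented): if each frame is reconstructed as the previous reconstruction plus a transmitted residual, dropping one residual corrupts every subsequent frame until the next intra-coded refresh.

```python
import numpy as np

def decode_predictive(first_frame, residuals, lost=()):
    """Toy differential decoder: frame[t] = frame[t-1] + residual[t].

    A residual index listed in `lost` is treated as all zeros, and the resulting
    error drifts through every later frame.
    """
    frames = [first_frame.copy()]
    for t, res in enumerate(residuals):
        step = np.zeros_like(res) if t in lost else res
        frames.append(frames[-1] + step)
    return frames

rng = np.random.default_rng(0)
truth = np.cumsum(rng.integers(-2, 3, size=(10, 4)), axis=0)   # "true" frames
residuals = np.diff(truth, axis=0)
clean = decode_predictive(truth[0], residuals)
damaged = decode_predictive(truth[0], residuals, lost={3})     # lose one residual
errors = [np.abs(c - d).sum() for c, d in zip(clean, damaged)]
print(errors)  # zero before the loss, non-zero for every frame afterwards
```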
After introducing the subject matter, we present the H.264 profiles and briefly review their intended applications. We then describe FMO and justify its usefulness for transmission over lossy networks; more precisely, we study its cost in terms of overhead and the improvement it offers in the visual quality of damaged video frames. The unavailability of FMO in most H.264 profiles leads us to design a lossless FMO-removal scheme, which allows the playback of FMO-encoded video on non-FMO-compliant decoders. We describe the process of removing the FMO structure, but also underline some limitations that restrict the scheme's applicability. We also assess the induced overhead and propose a model to predict it when FMO Type 1 is employed.
Finally, we develop a new error-concealment method to enhance video quality without relying on channel feedback. This method is shown to be superior to existing methods, including those in the JM reference software, and can be applied to compensate for the limitations of the proposed FMO-removal scheme. After introducing the new method, we evaluate its performance and compare it with some classical algorithms.
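As a minimal illustration of the kind of spatial concealment FMO enables (illustrative only; this is neither the method proposed in the thesis nor the JM algorithms), a lost macroblock can be estimated as the average of its correctly received 4-neighbors, which works well when FMO scatters losses so that neighbors survive:

```python
import numpy as np

def conceal_lost_macroblocks(mb_grid, lost_mask):
    """Replace each lost macroblock by the mean of its available 4-neighbors.

    mb_grid:   (rows, cols, 16, 16) array of macroblock pixels.
    lost_mask: (rows, cols) boolean array, True where the macroblock was lost.
    """
    rows, cols = lost_mask.shape
    out = mb_grid.copy()
    for r in range(rows):
        for c in range(cols):
            if not lost_mask[r, c]:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            usable = [mb_grid[i, j] for i, j in neighbors
                      if 0 <= i < rows and 0 <= j < cols and not lost_mask[i, j]]
            if usable:
                out[r, c] = np.mean(usable, axis=0)
    return out

# FMO-style isolated loss: the damaged macroblock keeps all four neighbors intact.
mbs = np.random.default_rng(0).random((4, 6, 16, 16))
lost = np.zeros((4, 6), dtype=bool); lost[1, 2] = True
repaired = conceal_lost_macroblocks(mbs, lost)
```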
|
664 |
Informed watermarking and compression of multi-sources. Dikici, Çağatay; Baskurt, Atilla. January 2008.
Doctoral thesis: Computer Science: Villeurbanne, INSA: 2007. / Thesis written in English. Title from title screen. Bibliography p. 139-147. Author's publications p. 133-134. Author index.
|
665 |
Adaptive Prädiktionsfehlercodierung für die Hybridcodierung von Videosignalen [Adaptive prediction error coding for hybrid coding of video signals]. Narroschke, Matthias. January 1900.
Originally presented as the author's thesis--Universität Hannover. / Includes bibliographical references.
|
666 |
Performance comparison between three different bit allocation algorithms inside a critically decimated cascading filter bank. Weaver, Michael B. January 2009.
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2009. / Includes bibliographical references.
|
667 |
Fault tolerance in distributed systems: a coding-theoretic approach. Balasubramanian, Bharath. 19 November 2012.
Distributed systems are rapidly increasing in importance due to the need for scalable computations on huge volumes of data. This fact is reflected in many real-world distributed applications such as Amazon's EC2 cloud computing service, Facebook's Cassandra key-value store, and Apache's Hadoop MapReduce framework. Multi-core architectures developed by companies such as Intel and AMD have further brought this to prominence, since workloads can now be distributed across many individual cores. The nodes or entities in such systems are often built from commodity hardware and are prone to physical failures and security vulnerabilities. Achieving fault tolerance in such systems is a challenging task, since it is not easy to observe and control these distributed entities.

Replication is a standard approach to fault tolerance in distributed systems. Its main advantage is that the backups incur very little overhead in terms of the time taken for normal operation or recovery. However, replication is grossly wasteful in terms of the number of backups required for fault tolerance. The large number of backups has two major implications. First, the total space or memory required for fault tolerance is considerably high. Second, there is a significant cost in resources, such as the power required to run the backup processes. Given the large number of distributed servers employed in real-world applications, it is hard to provide fault tolerance while achieving both space and operational efficiency.

In the world of data fault tolerance and communication, coding theory is used as the space-efficient alternative to replication. A direct application of coding theory to distributed servers, treating the servers as blocks of data, is very inefficient in terms of the updates to the backups, primarily because each update to a server affects many blocks in memory, all of which have to be re-encoded at the backups. This leads us to the following thesis statement: can we design a mechanism for fault tolerance in distributed systems that combines the space efficiency of coding theory with the low operational overhead of replication?

We present a new paradigm to solve this problem, broadly referred to as fusion. We provide fusion-based solutions for two models of computation that are representative of a large class of applications: (i) systems modeled as deterministic finite state machines, and (ii) systems modeled as programs containing data structures. For finite state machines, we use the notion of Hamming distances to present a polynomial-time algorithm that generates efficient backup state machines. For programs hosting data structures, we use a combination of erasure codes and selective replication to generate efficient backups for the most commonly used data structures, such as queues, array lists, linked lists, vectors, and maps. We present theoretical and experimental results that demonstrate the efficiency of our schemes over replication. Finally, we use our schemes to design an efficient solution for fault tolerance in two real-world applications: Amazon's Dynamo key-value store and Google's MapReduce framework.
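As a toy illustration of the coding-theoretic trade-off described above (a hand-rolled XOR sketch, not the fusion algorithms developed in the thesis), a single parity backup can protect k servers' state against one crash fault using one extra copy instead of k replicas, at the cost of touching the backup on every primary update:

```python
from functools import reduce

class XorFusedBackup:
    """Maintain one XOR 'fused' backup of k equal-length integer arrays.

    Tolerates the crash of any single primary: its state is recovered from the
    fused backup and the k-1 surviving primaries.  One backup instead of k
    replicas, but every primary update must also re-encode the backup.
    """

    def __init__(self, primaries):
        self.primaries = [list(p) for p in primaries]
        self.fused = [reduce(lambda a, b: a ^ b, col) for col in zip(*self.primaries)]

    def update(self, server, index, value):
        old = self.primaries[server][index]
        self.primaries[server][index] = value
        self.fused[index] ^= old ^ value          # incremental re-encode of the backup

    def recover(self, crashed):
        survivors = [p for i, p in enumerate(self.primaries) if i != crashed]
        return [reduce(lambda a, b: a ^ b, col, f)
                for f, col in zip(self.fused, zip(*survivors))]

# Three primaries, one fused backup; recover server 1 after a crash.
backup = XorFusedBackup([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
backup.update(1, 0, 40)
assert backup.recover(1) == [40, 5, 6]
```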
|
668 |
Optimal data dissemination in stochastic and arbitrary wireless networks. Li, Hongxing (李宏兴). January 2012.
Data dissemination among wireless devices is an essential application in wireless networks. In contrast to their wired counterparts, which have more stable network settings, wireless networks are subject to network dynamics, such as variable network topology and channel availability and capacity, due to user mobility, signal collision, random channel fading and scattering, etc. Network dynamics complicate protocol design for optimal data dissemination. Although the topic has been intensively discussed for many years, existing solutions are still not completely satisfactory, especially for stochastic or arbitrary networks. In this thesis, we address optimal data dissemination in both stochastic and arbitrary wireless networks, using techniques from Lyapunov optimization, graph theory, network coding, multi-resolution coding, and successive interference cancellation.
We first discuss the maximization of long-run time-averaged throughput utility for unicast and multirate multicast, respectively, in stochastic wireless networks without probing into the future. For multi-session unicast communications, a utility-maximizing cross-layer design, composed of joint end-to-end rate control, routing, and channel allocation, is proposed for cognitive radio networks with stochastic primary-user occupations. We then study optimal multirate multicast to receivers with non-uniform receiving rates, also making dynamic cross-layer decisions, in a general wireless network with both a time-varying topology and random channel capacities, by utilizing random linear network coding and multi-resolution coding. In both solutions, we assume users are selfish and prefer to relay data only for others with strong social ties. Such social selfishness of users is a new constraint in network protocol design, and its impact on efficient data dissemination in wireless networks is largely unstudied, especially under stochastic settings. Lyapunov optimization is applied in our protocol designs to achieve close-to-optimal utilities.
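As a minimal sketch of random linear network coding over GF(2) (illustrative only; the thesis embeds it in a larger cross-layer optimization, and the packet sizes here are invented), each coded packet is a random XOR of the source packets, and a receiver decodes by Gaussian elimination once it has collected enough linearly independent combinations:

```python
import numpy as np

def rlnc_encode(packets, num_coded, rng=None):
    """Produce coded packets as random GF(2) combinations of the source packets."""
    rng = np.random.default_rng() if rng is None else rng
    coeffs = rng.integers(0, 2, size=(num_coded, len(packets)), dtype=np.uint8)
    coded = (coeffs @ np.asarray(packets, dtype=np.uint8)) % 2
    return coeffs, coded.astype(np.uint8)

def rlnc_decode(coeffs, coded):
    """Gaussian elimination over GF(2); recovers the sources once coeffs has full column rank."""
    k = coeffs.shape[1]
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            raise ValueError("not enough independent combinations yet")
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    return A[:k, k:]

# A receiver keeps collecting coded packets until it can decode the 4 source packets.
rng = np.random.default_rng(2)
source = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)   # 4 packets of 8 bits
got_coeffs, got_payloads = [], []
while True:
    c, p = rlnc_encode(source, num_coded=1, rng=rng)
    got_coeffs.append(c[0]); got_payloads.append(p[0])
    try:
        decoded = rlnc_decode(np.array(got_coeffs), np.array(got_payloads))
        break
    except ValueError:
        pass
assert np.array_equal(decoded, source)
```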
Next, we turn to latency-minimizing data aggregation in wireless sensor networks with arbitrary network topologies under the physical interference model. Unlike our work on stochastic networks, where we target time-averaged optimality over the long run, the objective here is to minimize the time span needed to accomplish one round of aggregation scheduling for all sensors in an arbitrary topology. This problem is NP-hard, involving both aggregation tree construction and collision-free link scheduling. The current literature mostly considers the protocol interference model, which has been shown to be less practical than the physical interference model in characterizing real-world interference relations. A distributed solution under the physical interference model is challenging, since the cumulative interference from all concurrently transmitting devices needs to be well measured. In this thesis, we present a distributed aggregation protocol with an improved approximation ratio compared with previous work. We then discuss the tradeoff between aggregation latency and energy consumption for arbitrary topologies when the successive interference cancellation technique is employed. Another distributed algorithm is introduced with asymptotic optimality in both aggregation latency and the latency-energy tradeoff.
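To make the physical interference model concrete (a simplified sketch with illustrative parameters for the path-loss exponent, SINR threshold, noise power, and transmit power; not the scheduling algorithm from the thesis), a set of simultaneously scheduled links is feasible only if every receiver's SINR, computed against the cumulative interference from all other transmitters, exceeds a threshold:

```python
import math

def sinr_feasible(links, beta=2.0, alpha=3.0, noise=1e-9, power=1.0):
    """Check whether all links can transmit concurrently under the physical model.

    links: list of ((tx_x, tx_y), (rx_x, rx_y)) coordinate pairs.
    A link is served if P*d(tx, rx)^-alpha / (noise + interference) >= beta,
    where the interference sums the received power from every *other* transmitter.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for i, (tx_i, rx_i) in enumerate(links):
        signal = power * dist(tx_i, rx_i) ** (-alpha)
        interference = sum(power * dist(tx_j, rx_i) ** (-alpha)
                           for j, (tx_j, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

# Two short links far apart coexist; pulling them close violates the SINR constraint.
print(sinr_feasible([((0, 0), (1, 0)), ((100, 0), (101, 0))]))  # True
print(sinr_feasible([((0, 0), (1, 0)), ((2, 0), (3, 0))]))      # False
```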
Through theoretical analysis and empirical study, we rigorously examine the optimality of our protocols, comparing them with both the theoretical optima and existing solutions.
|
669 |
Wireless systems incorporating full-diversity single-symbol decodable space-time block codes: performance evaluations and developments. Lee, Hoo-jin (1973-). 29 August 2008.
Not available
|
670 |
Rational Point Counts for del Pezzo Surfaces over Finite Fields and Coding Theory. Kaplan, Nathan. 30 September 2013.
The goal of this thesis is to apply an approach due to Elkies to study the distribution of rational point counts for certain families of curves and surfaces over finite fields. A vector space of polynomials over a fixed finite field gives rise to a linear code, and the weight enumerator of this code gives information about point count distributions. The MacWilliams theorem relates the weight enumerator of a linear code to the weight enumerator of its dual code. For certain codes C coming from families of varieties where it is not known how to determine the distribution of point counts directly, we analyze low-weight codewords of the dual code and apply the MacWilliams theorem and its generalizations to gain information about the weight enumerator of C. These low-weight dual codewords can be described in terms of point sets that fail to impose independent conditions on the family of varieties. Our main results concern rational point count distributions for del Pezzo surfaces of degree 2 and for certain families of genus 1 curves. These weight enumerators have interesting geometric and coding-theoretic applications for small q.
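For reference, the MacWilliams identity invoked here relates the weight enumerator of a linear code C of length n over F_q to that of its dual (a standard statement, included for context rather than taken from the thesis):

```latex
W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\, W_{C}\bigl(x + (q-1)y,\; x - y\bigr),
\qquad W_{C}(x, y) \;=\; \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)}.
```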
|