11

Low-Complexity Erasure Decoding of Staircase Codes

Clelland, William Stewart 30 August 2023 (has links)
This thesis presents a new low-complexity erasure decoder for staircase codes in optical interconnects between data centers. We developed a parallel software simulation environment to measure the performance of the erasure decoding techniques at output error rates relevant to an optical link. Low-complexity erasure decoding demonstrated a 0.06 dB increase in coding gain over bounded-distance decoding at an output error rate of 3 × 10⁻¹², and a log-linear extrapolation predicts a gain of 0.09 dB at 10⁻¹⁵. This improvement is achieved without increasing the maximum number of decoding iterations and with power held constant. In addition, we found the optimal position within the decoding window at which to apply erasure decoding so as to minimize iteration count and output error rate, as well as the erasure threshold that minimizes the iteration count subject to the constrained erasure decoding structure.
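For illustration, a minimal sketch of the kind of log-linear extrapolation quoted above is given below; the measured points other than the 0.06 dB value at 3 × 10⁻¹² are hypothetical placeholders, and the fit (linear in log₁₀ of the output error rate) is an assumption for demonstration only.

```python
# Illustrative sketch (not from the thesis): log-linear extrapolation of coding
# gain versus output error rate. Only the 0.06 dB point at 3e-12 comes from the
# abstract; the remaining "measurements" are hypothetical placeholders.
import numpy as np

# (output error rate, coding gain in dB) -- hypothetical measurements
points = [(1e-9, 0.045), (1e-10, 0.05), (1e-11, 0.055), (3e-12, 0.06)]
x = np.log10([p[0] for p in points])      # fit is linear in log10(error rate)
y = [p[1] for p in points]

slope, intercept = np.polyfit(x, y, 1)    # least-squares line
target = 1e-15
predicted_gain = slope * np.log10(target) + intercept
print(f"extrapolated coding gain at {target:.0e}: {predicted_gain:.3f} dB")
```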
12

Bioinformatic Applications in Protein Low Complexity Regions and Targeted Metagenomics

Dickson, Zachery January 2023 (has links)
Part I: Low-complexity regions (LCRs) are common motifs in eukaryotic proteins, despite the fact that they are also mutationally unstable. For LCRs to be widely used and tolerated, there must be regulatory mechanisms which compensate for their presence. I have endeavored to characterize the relationships and co-evolution of LCRs with the abundance of the proteins that host them and of the transcripts which encode them. As the abundance of a gene product is ultimately responsible for its associated phenotype, any such relationships have implications for the many neurodegenerative diseases associated with LCR expansion. I found that such relationships do exist. LCRs are more associated with low-abundance proteins, but the opposite is true at the RNA level: transcripts encoding LCRs have higher abundance. Investigating the co-evolution of LCRs and transcript abundance revealed that, on short evolutionary timescales, indels in LCRs influence the selective pressures on transcript abundance. Viewing LCRs through the previously unexplored lens of abundance has generated new results which, together with explorations of information flow and of low complexity in untranslated regions, expand our knowledge of the functional impacts of LCR evolution. Part II: A commonly encountered problem in DNA sequencing is a situation where the DNA of interest makes up a small proportion of the DNA in a sample. This challenge can be compounded when the DNA of interest may come from many different organisms. Targeted metagenomics is a set of techniques which aim to bias sequencing results towards the DNA of interest. Many of these techniques rely on carefully designed probes which are specific to targets of interest. I have developed a bioinformatic tool, HUBDesign, to design oligonucleotide probes that capture identifying sequences from a given set of targets of interest. Using HUBDesign and other methods, I have contributed to projects ranging in context from clinical to ancient DNA. / Thesis / Doctor of Science (PhD) / This thesis describes research in two fields: repetitive protein sequences, and methods for sequencing the portions of a sample in which one is most interested. In the first part I describe the general properties of repetitive proteins, establish a connection between the presence of repeats in a protein and the amount of that protein which a cell maintains, and show that these two quantities evolve together. This informs our understanding of evolution and regulation, with implications for repeat-related diseases and further evolutionary research. In the second part I describe a method for selecting short nucleotide sequences which can be used to specifically capture the DNA of organisms of interest, as well as applications of this and other methods. These contributions are widely applicable, as targeted sequencing is useful in fields as far apart as clinical sepsis diagnosis and determining the colour of ancient animals.
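As a rough illustration of how LCRs are commonly flagged in practice, the sketch below scores a protein sequence by Shannon entropy over a sliding window; the window size, threshold, and example sequence are assumptions, not the methods or data used in the thesis.

```python
# Minimal sketch of flagging low-complexity regions (LCRs) in a protein
# sequence by Shannon entropy over a sliding window. Window size, threshold
# and the example sequence are illustrative assumptions only.
from collections import Counter
from math import log2

def window_entropy(seq: str) -> float:
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def flag_lcrs(protein: str, window: int = 12, threshold: float = 2.2):
    """Return (start, end) spans whose window entropy falls below threshold."""
    hits = []
    for i in range(len(protein) - window + 1):
        if window_entropy(protein[i:i + window]) < threshold:
            hits.append((i, i + window))
    return hits

# The poly-Q run in this made-up sequence is flagged as low complexity.
print(flag_lcrs("MKTAYIAKQRQQQQQQQQQQQISFVKSHFSRQLEERLGLIEVQ"))
```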
13

Increased Substitution Rates in DNA Surrounding Low-Complexity Regions

Lenz, Carolyn 10 1900 (has links)
Previous studies have found that DNA flanking low-complexity regions (LCRs) has an increased substitution rate. Here, the substitution rate was confirmed to increase in the vicinity of LCRs in several primate species, including humans. This effect was also found within human sequences from the 1000 Genomes Project. A strong correlation was found between average substitution rate per site and distance from the LCR, as well as between the proportion of genes with gaps in the alignment at each site and distance from the LCR. Along with substitution rates, dN/dS ratios were also determined for each site, and the proportion of sites undergoing negative selection was found to have a negative relationship with distance from the LCR.

Low-complexity regions in proteins often form and extend through the gain or loss of repeated units, a process that depends on the presence of a relatively pure string of repeats. Any interruption should disrupt the mechanisms of LCR extension and contraction, inhibiting LCR formation. Despite this, several examples have been found of LCR-coding DNA that is interrupted by introns. While many of these LCRs may be the result of two shorter LCRs forming on opposite sides of an intron, shuffling the order of exons showed that more intron-interrupted LCRs exist than would be expected to occur randomly. Another possible explanation for this phenomenon is the apparent movement of either the LCRs or the introns, possibly through recombination or the appearance of new splice sites through the gain of repeat units. / Master of Science (MSc)
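A toy sketch of the flank analysis described above is shown here; the per-site rates are synthetic and their decay with distance is assumed purely for illustration, not taken from the thesis.

```python
# Toy sketch (synthetic data, not the thesis's measurements): correlate
# per-site substitution rate with distance from an LCR boundary.
import numpy as np

rng = np.random.default_rng(1)
distance = np.arange(1, 301)                              # sites from LCR edge
# synthetic rates that decay with distance plus noise, for illustration only
rate = 0.8 * np.exp(-distance / 100) + rng.normal(0, 0.05, distance.size)

r = np.corrcoef(distance, rate)[0, 1]
print(f"Pearson correlation between distance and substitution rate: {r:.2f}")
```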
14

Low-Complexity Compression Techniques for High Frame Rate Video

Yang, Duo January 2017 (has links)
Recently, video has become one of the most important multimedia resources shared in our work and daily life. With the development of high frame rate video (HFV), the write speed from the high-speed camera array sensor to the mass data storage device has been regarded as the main constraint on HFV applications. In this thesis, low-complexity compression techniques are proposed for HFV acquisition and transmission. The core technique of the developed codec is the application of the Slepian-Wolf (SW) coding theorem to video compression. The light-duty encoder employs SW encoding, resulting in lower computational cost. Pixel values are transformed into bit sequences, and the bits on the same bit plane are assembled into eight bit streams. For each bit plane, a statistical binary symmetric channel (BSC) is constructed to describe the dependency between the source image and the side-information (SI) image. Furthermore, an improved coding scheme is applied to exploit the spatial correlation between two consecutive bit planes, which reduces the source coding rates. In contrast to the encoder, the collaborative heavy-duty decoder shoulders the burden of achieving high reconstruction fidelity. Motion estimation (ME) and motion compensation (MC) employ the block-matching algorithm to predict the SI image, and the received syndrome sequence can then be SW decoded with the SI. To address different compression goals, compression is separated into the original-resolution and downsampled cases. Compression at the original resolution completes after SW decoding, whereas for compression at reduced resolution the SW-decoded image must be upsampled by a state-of-the-art learning-based super-resolution technique, A+. Since important image details are lost after resizing, ME and MC are applied again to refine the upsampled image, improving the reconstruction PSNR. Experimental results show that the proposed low-complexity compression techniques are effective at improving reconstruction fidelity and compression ratio. / Thesis / Master of Applied Science (MASc)
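The bit-plane step described above is straightforward to illustrate; the sketch below splits 8-bit pixels into eight bit planes and checks that the decomposition is lossless. It is a generic decomposition, not the thesis's full Slepian-Wolf codec.

```python
# Minimal sketch of the bit-plane step described above: split an 8-bit image
# into eight bit planes (one bit stream per plane). Generic decomposition,
# not the thesis's codec.
import numpy as np

image = np.random.default_rng(2).integers(0, 256, size=(4, 4), dtype=np.uint8)

planes = [((image >> b) & 1) for b in range(8)]   # plane 0 = least significant
bitstreams = [p.flatten() for p in planes]        # one stream per bit plane

# Reassembling the planes reproduces the original pixels exactly.
reconstructed = sum((p.astype(np.uint8) << b) for b, p in enumerate(planes))
assert np.array_equal(reconstructed, image)
print("bit plane 7 (MSB):", bitstreams[7])
```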
15

New Results on Selection Diversity over Fading Channels

Zhao, Qiang 05 March 2003 (has links)
This thesis develops a mathematical framework for analyzing the average bit error rate performance of five different selection diversity combining schemes over slow, frequency non-selective Rayleigh, Nakagami-m and Ricean fading channels. Aside from classical selection diversity, generalized selection combining and the "maximum output" selection method, two new selection rules based on choosing the branch providing the largest magnitude of log-likelihood ratio for binary phase shift keying signals (with and without phase compensation in the selection process) are also investigated. The proposed analytical framework is sufficiently general to study the effects of dissimilar fading parameters and unequal mean received signal strengths across the independent diversity paths. The effect of branch correlation on the performance of a dual-diversity system is also studied. The accuracy of our analytical expressions has been validated by extensive Monte Carlo simulation runs. The proposed selection schemes based on the log-likelihood ratio are attractive in the design of low-complexity rake receivers for wideband CDMA and ultra-wideband communication systems. / Master of Science
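The LLR-based selection rule can be illustrated with a small Monte Carlo sketch; the one below compares classical max-SNR selection with max-|LLR| selection for BPSK over dual-branch Rayleigh fading, using assumed parameters rather than the thesis's analytical framework.

```python
# Monte Carlo sketch (illustrative, not the thesis's exact framework): BER of
# dual-branch selection diversity for BPSK over i.i.d. Rayleigh fading,
# comparing classical max-SNR selection with selecting the branch whose
# log-likelihood ratio (LLR) has the largest magnitude.
import numpy as np

rng = np.random.default_rng(3)
snr_db, n_bits, L = 10.0, 200_000, 2          # assumed SNR, bits, branches
noise_var = 10 ** (-snr_db / 10)

bits = rng.integers(0, 2, n_bits)
s = 1 - 2 * bits                               # BPSK symbols +/-1
h = (rng.normal(size=(L, n_bits)) + 1j * rng.normal(size=(L, n_bits))) / np.sqrt(2)
n = np.sqrt(noise_var / 2) * (rng.normal(size=(L, n_bits)) + 1j * rng.normal(size=(L, n_bits)))
r = h * s + n

llr = 4 * np.real(np.conj(h) * r) / noise_var  # per-branch BPSK LLR

# Classical selection: branch with the largest channel gain (max SNR).
best_snr = np.abs(h).argmax(axis=0)
ber_sc = np.mean((llr[best_snr, np.arange(n_bits)] < 0) != (s < 0))

# LLR-based selection: branch with the largest |LLR|.
best_llr = np.abs(llr).argmax(axis=0)
ber_llr = np.mean((llr[best_llr, np.arange(n_bits)] < 0) != (s < 0))

print(f"BER, max-SNR selection  : {ber_sc:.4e}")
print(f"BER, max-|LLR| selection: {ber_llr:.4e}")
```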
16

Micronetworking: Reliable Communication on 3D Integrated Circuits

Contreras, Andres A. 01 May 2010 (has links)
The potential failure of through-silicon vias (TSVs) poses a challenge to extending the useful life of a 3D integrated circuit (IC). A model is proposed to mitigate the communication problem in 3D ICs caused by breaks at the TSVs. We provide the details of a low-complexity network that takes advantage of redundant TSVs to re-route around breaks and maintain effective communication between layers. Different configurations for the micronetwork are analyzed and discussed. We also present an evaluation of the micronetwork's performance, which proves quite promising, based on several Monte Carlo simulations. Finally, we provide some directions for future research on the subject.
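A toy Monte Carlo sketch of the underlying redundancy idea is given below; the grouping of signals and spares, failure probability, and trial count are assumptions for illustration, not the micronetwork model analyzed in the thesis.

```python
# Toy Monte Carlo sketch (assumed parameters, not the thesis's model):
# probability that all inter-layer signals can still be routed when each
# through-silicon via (TSV) fails independently and spare TSVs are shared
# within a group.
import numpy as np

rng = np.random.default_rng(4)
signals_per_group, spares_per_group, p_fail = 8, 2, 0.01
trials = 100_000

# A group survives if the number of failed TSVs does not exceed the spares.
failures = rng.binomial(signals_per_group + spares_per_group, p_fail, trials)
survive = np.mean(failures <= spares_per_group)
print(f"P(group still fully connected) ~= {survive:.4f}")
```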
17

Turbo Equalization for HSPA / Turboutjämning för HSPA

Konuskan, Cagatay January 2010 (has links)
New high-quality mobile telecommunication services are offered every day, and the demand for higher data rates is continuously increasing. To maximize the uplink throughput in HSPA when transmission propagates through a dispersive channel causing self-interference, equalizers are used. One interesting solution for improving equalizer performance, in which the equalizer and decoder exchange information iteratively, is Turbo equalization.

In this thesis a literature survey of Turbo equalization methods has been performed, and a chosen method has been implemented for the uplink HSPA standard to evaluate its performance in heavily dispersive channels. The selected algorithm has been adapted for multiple receiving antennas, oversampled processing and HARQ retransmissions. Results derived from computer-based link simulations show that the implemented algorithm provides a gain of approximately 0.5 dB when performing up to 7 Turbo equalization iterations. Gains of up to 1 dB have been obtained by disabling power control, not using retransmission combining and utilizing a single receiver antenna. The algorithm has also been evaluated with alternative dispersive channels, Log-MAP decoding, different code rates, and varying numbers of Turbo equalization and Turbo decoding iterations.

The simulation results do not motivate a real implementation of the chosen algorithm, considering the increased computational complexity and the small gain achieved in a full-featured receiver system. Further studies are needed before drawing conclusions about the HSPA uplink Turbo equalization approach.
19

Low-Complexity Interleaver Design for Turbo Codes

List, Nancy Brown 12 July 2004 (has links)
A low-complexity method of interleaver design, sub-vector interleaving, for both parallel and serially concatenated convolutional codes (PCCCs and SCCCs, respectively) is presented here. Because the method is low-complexity, it is uniquely suited to designing long interleavers. Sub-vector interleaving is based on a dynamical-system representation of the constituent encoders employed by PCCCs and SCCCs. Simultaneous trellis termination can be achieved with a single tail sequence using sub-vector interleaving for both PCCCs and SCCCs. In the case of PCCCs, the error floor can be lowered by sub-vector interleaving, which allows for an increase in the weight of the free-distance codeword and the elimination of the lowest-weight codewords generated by weight-2 terminating input sequences that determine the error floor at low signal-to-noise ratios (SNRs). In the case of SCCCs, sub-vector interleaving lowers the error floor by increasing the weight of the free-distance codewords. Interleaver gain can also be increased for SCCCs by interleaving the lowest-weight codewords from the outer encoder into non-terminating input sequences to the inner encoder. Sub-vector constrained S-random interleaving, a method for incorporating S-random interleaving into sub-vector interleavers, is also proposed. Simulations show that short interleavers incorporating S-random interleaving into sub-vector interleavers perform as well as or better than those designed by the best and most complex methods for designing short interleavers. A method for randomly generating sub-vector constrained S-random interleavers that maximizes the spreading factor, S, is also examined. Convergence of the turbo decoding algorithm to maximum-likelihood decisions on the decoded input sequence is required to demonstrate the improvement in BER performance produced by sub-vector interleavers. Such convergence does not always occur in the regions where it is feasible to generate the statistically significant numbers of error events required to approximate the BER performance of a particular coding scheme employing a sub-vector interleaver. Therefore, a technique for classifying error events by the mode of convergence of the decoder is used to illuminate the effect of the sub-vector interleaver at SNRs where it is possible to simulate the BER performance of the coding scheme.
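The S-random construction referenced above can be sketched as follows; this is the baseline S-random algorithm, not the sub-vector constrained variant developed in the thesis, and the interleaver length and spreading factor are illustrative.

```python
# Sketch of the generic S-random interleaver construction referenced above:
# draw positions at random and accept one only if it differs by more than S
# from each of the previous S accepted positions. This is the baseline
# algorithm, not the sub-vector constrained variant developed in the thesis.
import random

def s_random_interleaver(length: int, S: int, max_restarts: int = 1000):
    for _ in range(max_restarts):
        candidates = list(range(length))
        random.shuffle(candidates)
        perm = []
        for _ in range(length):
            for idx, c in enumerate(candidates):
                if all(abs(c - p) > S for p in perm[-S:]):
                    perm.append(candidates.pop(idx))
                    break
            else:
                break                 # dead end: restart with a new shuffle
        if len(perm) == length:
            return perm
    raise RuntimeError("failed to build an S-random interleaver; try smaller S")

pi = s_random_interleaver(length=64, S=4)
print(pi[:16])
```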
20

High-performance scheduling algorithms for wireless networks

Bodas, Shreeshankar Ravishankar 02 February 2011 (has links)
The problem of designing scheduling algorithms for multi-channel (e.g., OFDM-based) wireless downlink networks is considered, where the system has a large bandwidth and a proportionally large number of users to serve. For this system, while the classical MaxWeight algorithm is known to be throughput-optimal, its buffer-overflow performance is very poor (formally, it is shown to have a zero rate function in our setting). To address this, a class of algorithms called iHLQF (iterated Heaviest matching with Longest Queues First) is proposed. The algorithms in this class are shown to be throughput-optimal for a general class of arrival/channel processes, and also rate-function optimal (i.e., exponentially small buffer-overflow probability) for certain arrival/channel processes where the channel rates are 0 or 1 packets per timeslot. iHLQF, however, has higher computational complexity than MaxWeight (n⁴ vs. n² computations per timeslot, respectively). To overcome this issue, a new algorithm called SSG (Server-Side Greedy) is proposed. SSG is shown to be throughput-optimal, to give much better per-user buffer-overflow performance than the MaxWeight algorithm (a positive rate function for certain arrival/channel processes), and to have a computational complexity (n²) comparable to that of MaxWeight. Thus, it provides a useful trade-off between buffer-overflow performance and computational complexity. For multi-rate channel processes, where the channels can serve multiple packets per timeslot, new Markov chain-based coupling arguments are used to derive rate-function positivity results for the SSG algorithm. Finally, an algorithm called DMEQ is proposed and shown to be rate-function optimal for certain multi-rate channel scenarios, and its definition characterizes sufficient conditions for rate-function optimality in this regime. These results are validated by both analysis and simulations.
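For illustration, the sketch below runs one timeslot of a MaxWeight decision and of a simplified server-side greedy allocation under assumed ON/OFF unit-rate channels; it is a toy reading of the abstract, not the SSG or iHLQF algorithms as defined in the dissertation.

```python
# Toy sketch (simplified reading of the abstract, assumed ON/OFF channels with
# unit rates): one timeslot of MaxWeight versus a server-side greedy allocation
# that updates queue lengths as each server/channel is assigned.
import numpy as np

rng = np.random.default_rng(5)
n_users, n_channels = 6, 6
queues = rng.integers(0, 10, n_users).astype(float)
rates = rng.integers(0, 2, size=(n_users, n_channels)).astype(float)  # 0/1 packets

# MaxWeight: with no coupling across channels in this toy model, each channel
# independently serves the user maximizing queue length x channel rate.
mw_assign = [(c, int(np.argmax(queues * rates[:, c]))) for c in range(n_channels)]

# Server-side greedy: channels are visited in order; each serves the connected
# user with the longest *remaining* queue, and that queue is decremented.
ssg_queues = queues.copy()
ssg_assign = []
for c in range(n_channels):
    connected = np.where(rates[:, c] > 0)[0]
    if connected.size and ssg_queues[connected].max() > 0:
        u = connected[np.argmax(ssg_queues[connected])]
        ssg_queues[u] -= 1
        ssg_assign.append((c, int(u)))

print("MaxWeight (channel, user):", mw_assign)
print("SSG       (channel, user):", ssg_assign)
```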
