151. Multi-User Detection of Overloaded Systems with Low-Density Spreading. Fantuz, Mitchell, 11 September 2019.
Future wireless networks will have applications that require many devices to be connected to the network. Non-orthogonal multiple access (NOMA) is a promising multiple access scheme that allows more users to transmit simultaneously in a common channel than orthogonal signaling techniques do. This overloading allows for high spectral efficiencies, which can support the high demand for wireless access. One notable NOMA scheme is low-density spreading (LDS), a code-domain multiple access scheme. Low-density spreading operates like code division multiple access (CDMA) in the sense that users spread their data with a spreading sequence, but the spreading sequences have a low number of nonzero chips, hence the term low-density. The message passing algorithm (MPA) is typically used for multi-user detection (MUD) in LDS systems, and its complexity is exponential in the number of users contributing to each chip. LDS systems suffer from two inherent problems: high computational complexity and vulnerability to multipath channels. In this thesis, both problems are addressed. First, a lower-complexity MUD technique is presented, whose complexity is proportional to the square of the number of users. The proposed detector is based on minimum mean square error (MMSE) and parallel interference cancellation (PIC) detectors. Simulation results show the proposed MUD technique reduces multiplications by 81.84% and additions by 67.87% at a loss of about 0.25 dB with 150% overloading. Second, a precoding scheme designed to mitigate the effects of the multipath channel is presented: it applies an inverse channel response to the input signal before transmission, so that the received signal is free of the multipath effects that would otherwise destroy the low-density structure.
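The detector the abstract describes combines an MMSE first pass with parallel interference cancellation. A minimal numpy sketch of that combination for BPSK symbols follows; the function name, the BPSK assumption and the fixed iteration count are illustrative, and the thesis's actual detector additionally exploits the low-density structure of the spreading matrix:

```python
import numpy as np

def mmse_pic_detect(S, y, noise_var, n_pic_iters=3):
    """MMSE detection followed by parallel interference cancellation (PIC).

    S: (n_chips, n_users) low-density spreading matrix
    y: (n_chips,) received chip vector, y = S @ x + noise, x in {+1, -1}
    """
    n_chips, n_users = S.shape
    # Initial MMSE estimate: (S^H S + sigma^2 I)^{-1} S^H y
    R = S.conj().T @ S + noise_var * np.eye(n_users)
    x_hat = np.sign(np.linalg.solve(R, S.conj().T @ y).real)
    for _ in range(n_pic_iters):
        x_new = np.empty_like(x_hat)
        for k in range(n_users):
            # Cancel the reconstructed interference of all other users,
            # then match-filter the cleaned signal for user k.
            interference = S @ x_hat - S[:, k] * x_hat[k]
            cleaned = y - interference
            x_new[k] = np.sign((S[:, k].conj() @ cleaned).real)
        x_hat = x_new
    return x_hat
```

Complexity is dominated by the single K-by-K solve and the per-user cancellation passes, which is where the quadratic-in-users behaviour claimed above comes from.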
152. Analyzing the Roles of Descriptions and Actions in Open Systems. Hewitt, Carl; Jong, Peter de, 01 April 1983.
This paper analyzes relationships between the roles of descriptions and actions in large-scale, open-ended, geographically distributed, concurrent systems. Rather than attempt to deal with the complexities and ambiguities of currently implemented descriptive languages, we concentrate our analysis on what can be expressed in underlying frameworks such as the lambda calculus and first-order logic. By this means we conclude that descriptions and actions complement one another, neither being sufficient unto itself. This paper provides a basis for beginning to analyze the very subtle relationships that hold between descriptions and actions in Open Systems.
153. The female-to-male transsexual voice: Physiology vs. performance in production. January 2012.
Results of the three studies on the speech production of female-to-male transgender individuals (transmen) present phonetic evidence that transmen produce speech through what I term a triple decoupling. Transmen successfully decouple gender from biological sex. The results of the longitudinal studies exemplified that speakers born and raised female do not necessarily need to have a female voicing source or filter function. Both qualitative changes can be achieved (to different degrees) by bringing exogenous testosterone into the system, which virilizes both source and filter over time. Moreover, the cross-sectional study showed that articulatory gestures can be modified to move the acoustic targets towards the gendered target one is striving to present. The acoustic manifestations of transmen with different partner attraction offer the next type of decoupling, that between sexual orientation and gender identity. The results of the cross-sectional study imply that female-born individuals attracted to men do not necessarily have to identify as women. They can opt out of this self-identification by selectively adopting features associated with the gay cismale speaking style. This is suggested by the fact that sexual orientation was found to have a significant effect on the durational and spectral quality of the fricatives /s/ and /ʃ/, formant values and sentential pitch range. Finally, the longitudinal studies provide evidence for the third type of decoupling, which comes in the form of gender breaking free from physiology. The recurring "reverse J-pattern" of both the transitioning source and filter, as well as the mean fundamental frequency remaining above the pitch floor, illustrate the fact that transmen do not feel obliged to sound as masculine (as low-pitched and "low-formanted") as testosterone enables them to. This final type of decoupling also serves to demonstrate that many transmen decidedly do not opt in to the binary system of sex/gender even though they are physiologically able to do so. Although LGB speaking styles have been investigated before, this dissertation is the first to discuss a number of acoustic descriptors specifically in transmen's speech and place them into the context of hormone treatment, sexual orientation and disclosure status.
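Descriptors such as mean fundamental frequency and pitch floor can be estimated with standard tools. A minimal sketch, in which the file name, the pitch search range, and the percentile chosen for the pitch floor are all assumptions rather than the dissertation's protocol:

```python
import librosa
import numpy as np

# Load a speech recording (path is a placeholder).
y, sr = librosa.load("speaker.wav", sr=None)

# pYIN pitch tracking; the 50-400 Hz search range is assumed broad
# enough to cover both feminine and masculine speaking ranges.
f0, voiced, _ = librosa.pyin(y, fmin=50, fmax=400, sr=sr)

f0 = f0[voiced]                                  # keep voiced frames only
print("mean F0 (Hz):", np.nanmean(f0))           # voicing-source central tendency
print("pitch floor (5th pct):", np.nanpercentile(f0, 5))
```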
154. Low-Complexity Iterative Algorithms for Channel Coding and Compressed Sensing. Danjean, Ludovic, 29 November 2012.
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are used for decoding low-density parity-check (LDPC) codes, a class of error-correcting codes prized for their exceptional error-rate performance. In the more recent field of compressed sensing, iterative algorithms serve as reconstruction methods to recover a sparse signal from a set of linear equations, called observations. This thesis mainly concerns the development of low-complexity iterative algorithms for these two fields: the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm, the Interval-Passing Algorithm (IPA), for compressed sensing. In the first part of the thesis, we address decoding algorithms for LDPC codes. It is now well known that LDPC codes exhibit a so-called "error floor" phenomenon, caused by decoding failures of traditional belief-propagation-type decoders, despite their excellent decoding performance. Recently, a new class of low-complexity decoders with better error-floor performance, called finite alphabet iterative decoders (FAIDs), was proposed. In this manuscript we focus on the problem of selecting good FAID decoders for column-weight-3 LDPC codes over the binary symmetric channel. Traditional decoder-selection methods rely on asymptotic techniques such as density evolution, which do not guarantee good performance on finite-length codes, especially in the error-floor region. We therefore propose a selection method based on knowledge of the topologies harmful to decoding that may be present in a code, using the concept of "noisy trapping sets". Simulation results on several codes show that FAID decoders selected with this method outperform the belief-propagation decoder in the error-floor region. In the second part, we address iterative reconstruction algorithms for compressed sensing. Iterative algorithms have been proposed in this field to reduce the complexity of reconstruction by linear programming. In this thesis we modify and analyze the IPA, a low-complexity reconstruction algorithm that uses sparse matrices as measurement matrices. In parallel with work done in the coding-theory literature, we analyze the reconstruction failures of the IPA and establish the link with the "stopping sets" of the binary representation of the sparse measurement matrices. The performance of the IPA makes it a good compromise between reconstruction by $\ell_1$-norm minimization and the very simple verification algorithm.
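The IPA passes interval messages over the bipartite graph of a sparse measurement matrix. A compact, unoptimized sketch for a non-negative signal and a 0/1 measurement matrix; dense message storage, the initialization, and the stopping rule are simplifications of the algorithm the thesis analyzes:

```python
import numpy as np

def interval_passing(A, y, max_iters=50, tol=1e-9):
    """Sketch of the Interval-Passing Algorithm (IPA) for recovering a
    non-negative sparse x from y = A @ x, with A a sparse 0/1 matrix.
    Messages are intervals [lo, hi] guaranteed to contain each x_v."""
    m, n = A.shape
    checks = [np.flatnonzero(A[c]) for c in range(m)]    # vars in each check
    vars_ = [np.flatnonzero(A[:, v]) for v in range(n)]  # checks on each var

    # lo/hi[c, v]: variable-to-check interval messages (dense for clarity)
    lo = np.zeros((m, n))
    hi = np.zeros((m, n))
    for v in range(n):
        for c in vars_[v]:
            hi[c, v] = y[vars_[v]].min()  # x_v cannot exceed any measurement

    for _ in range(max_iters):
        # Check-to-variable: subtract the other variables' bounds from y_c.
        clo, chi = np.zeros((m, n)), np.zeros((m, n))
        for c in range(m):
            for v in checks[c]:
                others = checks[c][checks[c] != v]
                clo[c, v] = max(0.0, y[c] - hi[c, others].sum())
                chi[c, v] = y[c] - lo[c, others].sum()
        # Variable-to-check: tightest bounds seen from the other checks.
        for v in range(n):
            for c in vars_[v]:
                others = vars_[v][vars_[v] != c]
                if len(others):
                    lo[c, v] = clo[others, v].max()
                    hi[c, v] = chi[others, v].min()
        x_lo = np.array([clo[vars_[v], v].max() for v in range(n)])
        x_hi = np.array([chi[vars_[v], v].min() for v in range(n)])
        if np.all(x_hi - x_lo < tol):  # intervals collapsed: recovered
            break
    return x_lo, x_hi
```

When the algorithm succeeds, the two returned bounds coincide and equal the sparse signal; the thesis ties the failure cases to the stopping sets of the measurement matrix's binary representation.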
155. Generalized Survey Propagation. Tu, Ronghui, 09 May 2011.
Survey propagation (SP) has recently emerged as an efficient algorithm for solving classes of hard constraint-satisfaction problems (CSPs). Powerful as it is, SP remains a heuristic algorithm, and further understanding of its algorithmic nature, improving its effectiveness and extending its applicability are highly desirable.
Prior to the work in this thesis, Maneva et al. introduced a Markov random field (MRF) formalism for k-SAT problems, under which SP may be viewed as a special case of the well-known belief propagation (BP) algorithm. This result has sometimes been interpreted as establishing that "SP is BP", and it allows a rigorous extension of SP to a "weighted" version, or a family of algorithms, for k-SAT problems.
SP has also been generalized, in a non-weighted fashion, to solve non-binary CSPs. That generalization, however, is presented in the language of statistical physics and is somewhat difficult for a general audience to access.
This thesis generalizes SP both in its applicability to non-binary problems and in introducing "weights", extending SP to a family of algorithms. Under a generic formulation of CSPs, we first present an understanding of non-weighted SP for arbitrary CSPs in terms of "probabilistic token passing" (PTP).
We then show that this probabilistic interpretation of non-weighted SP makes it naturally generalizable to a weighted version, which we call weighted PTP.
Another main contribution of this thesis is a disproof of the folk belief that "SP is BP". We show that the fact that SP is a special case of BP for k-SAT problems is rather incidental: for more general CSPs, SP and generalized SP do not reduce to BP. We also establish the conditions under which generalized SP does reduce to a special case of BP.
To explore the benefit of generalizing SP to a wider family and to arbitrary, particularly non-binary, problems, we devised a simple weighted-PTP-based algorithm for solving 3-COL problems. Experimental results, compared against an existing non-weighted-SP-based algorithm, reveal the potential performance gain that generalized SP may bring.
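Since the discussion contrasts SP with BP throughout, a minimal sum-product BP on a generic discrete factor graph is a useful reference point. This is a sketch, not the thesis's PTP machinery: factor tables are numpy arrays, and the flooding schedule and normalization are conventional choices.

```python
import numpy as np
from itertools import product

def belief_propagation(factors, n_states, n_iters=20):
    """Minimal sum-product BP on a factor graph with discrete variables.

    factors: list of (variables_tuple, table) where table[s1, ..., sk]
             (a numpy array) scores the factor's joint assignment.
    """
    n_vars = 1 + max(v for vs, _ in factors for v in vs)
    msg_fv = {(f, v): np.ones(n_states)
              for f, (vs, _) in enumerate(factors) for v in vs}
    msg_vf = {(v, f): np.ones(n_states)
              for f, (vs, _) in enumerate(factors) for v in vs}

    for _ in range(n_iters):
        # variable -> factor: product of messages from the other factors
        for (v, f) in msg_vf:
            m = np.ones(n_states)
            for g, (vs, _) in enumerate(factors):
                if g != f and v in vs:
                    m *= msg_fv[(g, v)]
            msg_vf[(v, f)] = m / m.sum()
        # factor -> variable: marginalize the factor table
        for (f, v) in msg_fv:
            vs, table = factors[f]
            m = np.zeros(n_states)
            for assign in product(range(n_states), repeat=len(vs)):
                w = table[assign]
                for u, s in zip(vs, assign):
                    if u != v:
                        w *= msg_vf[(u, f)][s]
                m[assign[vs.index(v)]] += w
            msg_fv[(f, v)] = m / m.sum()

    # beliefs: product of all incoming factor-to-variable messages
    beliefs = np.ones((n_vars, n_states))
    for (f, v), m in msg_fv.items():
        beliefs[v] *= m
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```

For 3-COL, for instance, each graph edge (i, j) becomes a factor over (i, j) with table 1 - np.eye(3), and the returned beliefs are per-vertex colour marginals. Per the thesis, SP's richer messages coincide with BP of this kind only in special cases such as k-SAT on Maneva et al.'s extended MRF.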
156. A Comparative Evaluation of Plastic Property Test Methods for Self-consolidating Concrete and Their Relationships with Hardened Properties. Shindman, Benjamin, 25 August 2011.
Self-consolidating concrete (SCC) is a special type of concrete that flows under its own weight and spreads readily into place while remaining stable. Although SCC technology has been rapidly progressing over the last 20 years and continues to develop, the relationships between the fresh, hardened and durability properties of SCC are not well documented.
The focus of this investigation is twofold. First, the use of SCC necessitates reliable and accurate characterization of material properties, and a variety of laboratory test methods are used to evaluate SCC's plastic properties. Because various test methods evaluate the same plastic properties, there is a need to critically investigate the adequacy and sensitivity of each test. Second, outcomes from this project are expected to advance the fundamental understanding of the interplay between the fresh properties of SCC and their implications for hardened properties and durability performance.
159. EXIT-chart-based analysis and design of rateless codes for the erasure and Gaussian channels. Mothi Venkatesan, Sabaresan, 02 June 2009.
Luby transform (LT) codes were the first class of universal erasure codes introduced to fully realize the concept of scalable and fault-tolerant distribution of data over computer networks, also called a digital fountain. Later, Raptor codes, a generalization of LT codes, were introduced to trade off complexity against performance. In this work, we show that an even broader class of codes exists that is near-optimal for the erasure channel, and that Raptor codes form a special case. More precisely, Raptor-like codes can be designed based on an iterative (joint) decoding schedule wherein information is transferred between the LT decoder and an outer decoder in an iterative manner. The design of these codes can be formulated as a linear programming (LP) problem using EXIT charts and density evolution. In our work, we show the existence of codes, other than Raptor codes, that perform as well as the existing ones.
We extend this framework of joint decoding of the component codes to additive white Gaussian noise channels and introduce the design of rateless codes for these channels. Under this setting, for asymptotic lengths, it is possible to design codes that work for a class of channels defined by the signal-to-noise ratio. In our work, we show that good profiles can be designed using density evolution and Gaussian approximation. EXIT charts prove to be an intuitive tool and aid in formulating the code design problem as an LP problem. Because EXIT charts are not exact, owing to their inherent approximations, we use density evolution to analyze the performance of these codes. In the Gaussian case, we show that for asymptotic lengths a range of rateless code designs exists to choose from, based on the required complexity and the overhead.
Moreover, under this framework, we can design incrementally redundant schemes for already existing outer codes to make the communication system more robust to channel noise variations.
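The thesis designs irregular degree profiles by LP, but the mechanics underneath an EXIT chart on the erasure channel reduce to a one-dimensional recursion. A sketch for a regular (dv, dc) LDPC ensemble on the binary erasure channel; the regular case is an illustration of the density-evolution condition, not the thesis's design procedure:

```python
def de_erasure(eps, dv, dc, n_iters=2000):
    """One density-evolution run for a regular (dv, dc) LDPC ensemble on
    the BEC with channel erasure probability eps; returns the residual
    erasure probability of variable-to-check messages. The EXIT-chart
    picture is exactly this recursion drawn as two curves."""
    x = eps
    for _ in range(n_iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bec_threshold(dv, dc, lo=0.0, hi=1.0, n_bisect=40):
    """Largest eps for which density evolution drives erasures to zero."""
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if de_erasure(mid, dv, dc) < 1e-10:
            lo = mid   # decoding succeeds: push eps up
        else:
            hi = mid   # decoding fails: back off
    return lo

# A (3, 6) regular ensemble has rate 1/2; its BEC threshold is about
# 0.4294, against the capacity limit of 0.5.
print(bec_threshold(3, 6))
```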
160. Performance Comparison of Message-Passing Decoding Algorithms for Binary and Non-Binary Low-Density Parity-Check (LDPC) Codes. Uzunoglu, Cihan, 01 December 2007.
In this thesis, we investigate the basics of low-density parity-check (LDPC) codes over binary and non-binary alphabets. We especially focus on message-passing decoding algorithms, which use different message definitions such as a posteriori probabilities, log-likelihood ratios and Fourier transforms of probabilities. We present simulation results comparing the performance of small-block-length binary and non-binary LDPC codes with regular and irregular structures over the GF(2), GF(4) and GF(8) alphabets. Comparing LDPC codes with mean variable-node degrees of 3, 2.8 and 2.6, we observe that non-binary alphabets improve performance when the mean column weight is selected carefully, since this choice governs the relative ordering of the GF(2), GF(4) and GF(8) results.
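Over GF(2^m), the "Fourier transforms of probabilities" mentioned above are Walsh-Hadamard transforms, which turn the check-node convolution into a pointwise product. A self-contained sketch for GF(4), labelling field elements by their 2-bit patterns so that field addition is XOR; the function names are illustrative:

```python
import numpy as np

def walsh_hadamard(p):
    """Unnormalized Walsh-Hadamard transform of a length-2^m vector;
    it diagonalizes convolution under XOR, which is exactly the role
    the Fourier transform plays in non-binary LDPC check nodes."""
    p = p.copy()
    h, n = 1, len(p)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = p[j], p[j + h]
                p[j], p[j + h] = a + b, a - b
        h *= 2
    return p

def check_node_update(incoming):
    """Distribution of the GF(2^m) sum of the neighbouring symbols:
    a convolution, computed as a product in the transform domain."""
    n = len(incoming[0])
    out = np.ones(n)                   # WHT of the identity (delta at 0)
    for msg in incoming:
        out = out * walsh_hadamard(np.asarray(msg, dtype=float))
    out = walsh_hadamard(out) / n      # inverse WHT = WHT / n
    out = np.clip(out, 0.0, None)      # guard against rounding noise
    return out / out.sum()

# Two GF(4) messages; the result is the distribution of their field sum.
p1 = [0.7, 0.1, 0.1, 0.1]
p2 = [0.25, 0.25, 0.25, 0.25]
print(check_node_update([p1, p2]))  # uniform: any sum is equally likely
```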