1.
Soft Demodulation Schemes for MIMO Communication Systems / Nekuii, Mehran / 08 1900
In this thesis, several computationally-efficient approximate soft demodulation schemes are developed for multiple-input multiple-output (MIMO) communication systems. These soft demodulators are designed to be deployed in the conventional iterative receiver ('turbo') architecture, and they are designed to provide good performance at substantially lower computational cost than that of the exact soft demodulator. The proposed demodulators are based on the principle of list demodulation and can be classified into two classes, according to the nature of the list-generation algorithm. One class is based on a tree-search algorithm and the other is based on insight generated from the analysis of semidefinite relaxation techniques for hard demodulation.
The proposed tree-search demodulators are based on a multi-stack algorithm, developed herein, for efficiently traversing the tree structure that is inherent in the MIMO demodulation problem. The proposed scheme was inspired, in part, by the stack algorithm, which stores all the visited nodes in the tree in a single stack and chooses the next node to expand based on a 'best-first' selection scheme. The proposed algorithm partitions this global stack into a stack for each level of the tree. It examines the tree in the natural ordering of the levels and performs a best-first search in each of the stacks. By assigning appropriate priorities to the level at which the search for the next leaf node re-starts, the proposed demodulators can achieve performance-complexity trade-offs that dominate several existing soft demodulators, including those based on the stack algorithm and those based on 'sphere decoding' principles, especially in the low-complexity region.
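As a rough sketch of the multi-stack idea described above (the data structures and the simple "shallowest non-empty stack" restart rule below are illustrative assumptions, not the thesis's tuned priority scheme):

```python
import heapq

def multistack_search(symbols, cost, depth, num_leaves):
    """Best-first search with one stack (priority queue) per tree level.

    cost(path) -> partial path metric (lower is better).
    Returns up to num_leaves full-length paths, cheapest first.
    """
    # One priority queue per level; level k holds partial paths of length k.
    stacks = [[] for _ in range(depth + 1)]
    heapq.heappush(stacks[0], (0.0, ()))
    leaves = []
    while len(leaves) < num_leaves:
        # Restart at the shallowest non-empty stack (a naive priority rule;
        # the thesis assigns restart priorities more carefully).
        level = next((k for k in range(depth) if stacks[k]), None)
        if level is None:
            break
        _, path = heapq.heappop(stacks[level])
        for s in symbols:  # expand the children one level down
            child = path + (s,)
            c = cost(child)
            if level + 1 == depth:
                leaves.append((c, child))
            else:
                heapq.heappush(stacks[level + 1], (c, child))
    leaves.sort()
    return leaves[:num_leaves]
```

With a squared-distance metric this returns the closest symbol vectors first, which is the list a soft demodulator would then use to approximate the bit LLRs.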
In the second part of this thesis it is shown that the randomization procedure that is inherent in the semidefinite relaxation (SDR) technique for hard demodulation can be exploited to generate the list members required for list-based soft demodulation. The direct application of this observation yields list-based soft demodulators that require the solution of only one semidefinite program (SDP) per demodulation-decoding iteration. By approximating the randomization procedure by a set of independent Bernoulli trials, this requirement can be reduced to just one SDP per channel use. An advantage of these demodulators over those based on optimal tree-search algorithms is that the computational cost of solving the SDP is a low-order polynomial in the problem size. The analysis and simulation experiments provided in the thesis show that the proposed SDR-based demodulators offer an attractive trade-off between performance and computational cost.
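A minimal sketch of list generation by independent Bernoulli trials, in the spirit of the approximation described above (how the probabilities would be read off the SDP solution is abstracted away; `probs` and `metric` are illustrative placeholders):

```python
import random

def bernoulli_list(probs, metric, list_size, trials=200, seed=1):
    """Generate a candidate list of {-1, +1} vectors by independent
    Bernoulli trials.

    probs[i] approximates P(s_i = +1), e.g. derived from a relaxed
    (SDP) solution; metric(s) is the ML cost to be minimised.
    """
    rng = random.Random(seed)
    seen = {}
    for _ in range(trials):
        s = tuple(1 if rng.random() < p else -1 for p in probs)
        if s not in seen:          # keep each distinct candidate once
            seen[s] = metric(s)
    # Return the list_size cheapest distinct candidates.
    ranked = sorted(seen.items(), key=lambda kv: kv[1])
    return [s for s, _ in ranked[:list_size]]
```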
The structure of the SDP in the proposed SDR-based demodulators depends on the signaling scheme, and the initial development focuses on the case of QPSK signaling. In the last chapter of this thesis, the extension to MIMO 16-QAM systems is developed, and some interesting observations regarding some existing SDR-based hard demodulation schemes for MIMO 16-QAM systems are derived. The simulation results reveal that the excellent performance-complexity trade-off of the proposed SDR-based schemes is preserved under the extension to 16-QAM signaling. / Thesis / Doctor of Philosophy (PhD)
2.
A computationally efficient adaptive beamformer for noise fields with unknown covariance / Wu, Tsai-Fu / January 1988
No description available.
3.
Computationally Efficient Methods for Detection and Localization of a Chirp Signal / Kashyap, Aditya / 12 February 2019
In this thesis, a computationally efficient method for detecting a whistle and capturing it using a four-microphone array is proposed. Furthermore, methods are developed to efficiently process the data captured from all the microphones to estimate the direction of the sound source. The accuracy, shortcomings and constraints of the proposed method are also discussed. Emphasis is placed on computational efficiency so that the methods may be implemented on a low-cost microcontroller and used to provide a heading to an Unmanned Ground Vehicle. / MS / As humans, we rely on our sense of hearing to help us interact with the outside world. It helps us to listen not just to other people but also for sounds that may be a warning for us. It can often be the first warning we get of an impending danger, as we might hear a predator before we see it or hear a car brake and slip before we turn to look at it. However, it is not merely the ability to hear a sound that makes hearing so useful; it is the fact that we can tell which direction the sound is coming from that makes it so important. That is what allows us to know which direction to turn towards to respond to someone, or from which direction a sound warning us of danger is coming. We may not be able to pinpoint the location of the source with complete accuracy, but we can discern the general heading. It was this idea that inspired this research work. We wanted to be capable of estimating where a sound is coming from while being computationally efficient, so that it may be implemented in real time with the help of a low-cost microcontroller. This would then be used to provide a heading to an Unmanned Ground Vehicle while keeping costs down.
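One common way to estimate direction from a microphone pair, sketched here for illustration (the thesis's actual processing chain may differ), is to pick the inter-microphone delay that maximises the cross-correlation and convert it to a bearing under a far-field model:

```python
import math

def tdoa_samples(x, y, max_lag):
    """Delay (in samples) of y relative to x that maximises their
    cross-correlation; positive means y lags x."""
    best_lag, best_val = 0, float("-inf")
    n = len(x)
    for lag in range(-max_lag, max_lag + 1):
        v = sum(x[i] * y[i + lag]
                for i in range(n)
                if 0 <= i + lag < n)
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

def bearing_deg(delay_samples, fs, mic_spacing, c=343.0):
    """Broadside bearing (degrees) from a two-microphone delay,
    assuming a far-field source and speed of sound c in m/s."""
    tau = delay_samples / fs
    s = max(-1.0, min(1.0, c * tau / mic_spacing))
    return math.degrees(math.asin(s))
```

With four microphones, pairwise delays of this kind can be combined to resolve the heading over the full circle.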
4.
Human expressivity in the control and integration of computationally generated audio / Heinrichs, Christian / January 2018
While physics-based synthesis offers a wide range of benefits in the real-time generation of sound for interactive environments, it is difficult to incorporate nuanced and complex behaviour that enhances the sound in a narrative or aesthetic context. The work presented in this thesis explores real-time human performance as a means of stylistically augmenting computational sound models. Transdisciplinary in nature, this thesis builds upon previous work in sound synthesis, film sound theory and physical sound interaction. Two levels on which human performance can enhance the aesthetic value of computational models are investigated: first, in the real-time manipulation of an idiosyncratic parameter space to generate unique sound effects, and second, in the performance of physical source models in synchrony with moving images. In the former, various mapping techniques were evaluated to control a model of a creaking door based on a proposed extension of practical synthesis techniques. In the latter, audio post-production professionals with extensive experience in performing Foley were asked to perform the soundtrack to a physics-based animation using bespoke physical interfaces and synthesis engines. The generated dataset was used to gain insights into stylistic features afforded by performed sound synchronisation, and potential ways of integrating them into an interactive environment such as a game engine. Interacting with practical synthesis models that have been extended to incorporate performability enables rapid generation of unique and expressive sound effects, while maintaining a believable source-sound relationship. Performatively authoring behaviours of sound models makes it possible to enhance the relationship between sound and image (both stylistically and perceptually) in ways precluded by one-to-one mappings between physics-based parameters.
Mediation layers are required in order to facilitate performed behaviour: in the design of the model on one hand, and in the integration of such behaviours into interactive environments on the other. This thesis provides some examples of how such a system could be implemented. Furthermore, some interesting observations are made regarding the design of physical interfaces for performing environmental sound, and the creative exploitation of model constraints.
5.
Fast growing and interpretable oblique trees via logistic regression models / Truong, Alfred Kar Yin / January 2009
The classification tree is an attractive method for classification as the predictions it makes are more transparent than most other classifiers. The most widely accepted approaches to tree-growth use axis-parallel splits to partition continuous attributes. Since the interpretability of a tree diminishes as it grows larger, researchers have sought ways of growing trees with oblique splits as they are better able to partition observations. The focus of this thesis is to grow oblique trees in a fast and deterministic manner and to propose ways of making them more interpretable. Finding good oblique splits is a computationally difficult task. Various authors have proposed ways of doing this by either performing stochastic searches or by solving problems that effectively produce oblique splits at each stage of tree-growth. A new approach to finding such splits is proposed that restricts attention to a small but comprehensive set of splits. Empirical evidence shows that good oblique splits are found in most cases. When observations come from a small number of classes, empirical evidence shows that oblique trees can be grown in a matter of seconds. As interpretability is the main strength of classification trees, it is important for oblique trees that are grown to be interpretable. As the proposed approach to finding oblique splits makes use of logistic regression, well-founded variable selection techniques are introduced to classification trees. This allows concise oblique splits to be found at each stage of tree-growth so that oblique trees that are more interpretable can be directly grown. In addition to this, cost-complexity pruning ideas which were developed for axis-parallel trees have been adapted to make oblique trees more interpretable. A major and practical component of this thesis is in providing the oblique.tree package in R that allows casual users to experiment with oblique trees in a way that was not possible before.
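A minimal sketch of how a logistic regression can yield an oblique split at a tree node (plain gradient descent on two attributes; illustrative only, not the thesis's split-search procedure or its variable selection):

```python
import math

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression; returns (w, b)."""
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    for _ in range(steps):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            e = p - yi                      # gradient of the log-loss
            for j in range(d):
                gw[j] += e * xi[j]
            gb += e
        w = [wj - lr * gw[j] / n for j, wj in enumerate(w)]
        b -= lr * gb / n
    return w, b

def oblique_split(X, w, b):
    """Partition row indices by the sign of the linear predictor:
    an oblique (non-axis-parallel) split of the node."""
    left = [i for i, xi in enumerate(X)
            if sum(wj * xj for wj, xj in zip(w, xi)) + b < 0]
    right = [i for i in range(len(X)) if i not in left]
    return left, right
```

Variable selection on the fitted coefficients (dropping attributes with negligible weight) is what makes such splits concise and hence interpretable.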
6.
Statistical models for social network dynamics / Lospinoso, Joshua Alfred / January 2012
The study of social network dynamics has become an increasingly important component of many disciplines in the social sciences. In the past decade, statistical models and methods have been proposed which permit researchers to draw statistical inference on these dynamics. This thesis builds on one such family of models, the stochastic actor oriented model (SAOM) proposed by Snijders [2001]. Goodness of fit for SAOMs is an area that is only just beginning to be filled in with appropriate methods. This thesis proposes a Mahalanobis distance based, Monte Carlo goodness of fit test that can depend on arbitrary features of the observed network data and covariates. As remediating poor fit can be a difficult process, a modified model distance (MMD) estimator is devised that can help researchers to choose among a set of model elaborations. In practice, panel data is typically used to draw SAOM-based inference. This thesis also proposes a score-type test for time heterogeneity between the waves in the panel that is computationally cheap and fits into a convenient, forward model-selection workflow. Next, this thesis proposes a rigorous method for aggregating so-called relational event data (e.g. emails and phone calls) by extending the SAOM family to a family of hidden Markov models that suppose a latent social network is driving the observed relational events. Finally, this thesis proposes a measurement model for SAOMs inspired by error-in-variables (EiV) models employed in an array of disciplines. Like the relational event aggregation model, the measurement model is a hidden Markov model extension to the SAOM family. These models allow the researcher to specify the form of the measurement error and buffer against potential attenuating biases and other problems that can arise if the errors are ignored.
7.
Statistical analysis of Likert data on attitudes / Javaras, Kristin Nicole / January 2004
Researchers interested in measuring people's underlying attitudes towards an object (e.g., abortion) often collect Likert data by administering a survey. Likert data consist of surveyees' responses to statements about the object, where responses fall into ordered categories running from 'Strongly agree' to 'Strongly disagree' or into a 'Don't Know / Can't Choose' category. Two examples of Likert data are used for illustrative purposes. The first dataset was collected by the author from American and British graduate students at Oxford University and contains items measuring underlying abortion attitudes. The second dataset was taken from British and American responses to the 1995 National Identity Survey (NIS) and contains items measuring underlying national pride and immigration attitudes. A model for Likert data and underlying attitudes is introduced. This model is more principled than existing models. It treats people's underlying attitudes as latent variables, and it specifies a relationship between underlying attitudes and responses that is consistent with attitudinal research. Further, the formal probability model for responses allows people's interpretation of the response categories to differ. The model is fitted by maximising an appropriate likelihood. Variants of the model are used to analyse Likert data in three contexts; in each, the method using our model compares favourably to existing methods. First, the model is used to visualise the structure underlying the abortion attitude data. This method of visualisation produces more sensible plots than analogous multivariate data visualisation methods. Second, the model is used to select the statements whose responses (in the abortion attitude data) best reflect underlying abortion attitudes. Our method of statement selection more closely adheres to attitude researchers' stated aims than popular methods based on sample correlations.
Third, the model is used to investigate how underlying national pride varies with nationality in the NIS data and also how underlying abortion attitude varies with gender, religious status, and nationality in the abortion attitude data. Unlike methods currently used by social scientists to model the relationship between attitudes and covariates, our method controls for the effects of differing response category interpretation. As a result, inferences about group differences in underlying attitudes are more robust to group differences in response category interpretation.
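One standard way to let response-category interpretation vary across people, in the spirit described above, is a cumulative-logit model with respondent-specific cutpoints. A sketch of the resulting category probabilities (illustrative only, not the thesis's exact likelihood):

```python
import math

def category_probs(attitude, cutpoints):
    """Cumulative-logit probabilities for ordered Likert categories.

    cutpoints: increasing thresholds (length C - 1).  Letting them
    vary by respondent models differing interpretation of the
    response categories.
    """
    def cdf(t):
        # P(response <= category with threshold t) given latent attitude
        return 1.0 / (1.0 + math.exp(-(t - attitude)))

    cums = [cdf(t) for t in cutpoints] + [1.0]
    # Adjacent differences of the cumulative probabilities.
    return [cums[0]] + [cums[i] - cums[i - 1] for i in range(1, len(cums))]
```

A stronger latent attitude shifts probability mass towards the higher (more agreeing) categories, as the test below checks.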
8.
Development of optimization methods to solve computationally expensive problems / Isaacs, Amitay (Engineering & Information Technology, Australian Defence Force Academy, UNSW) / January 2009
Evolutionary algorithms (EAs) are population based heuristic optimization methods used to solve single and multi-objective optimization problems. They can simultaneously search multiple regions to find global optimum solutions. As EAs do not require gradient information for the search, they can be applied to optimization problems involving functions of real, integer, or discrete variables. One of the drawbacks of EAs is that they require evaluations of numerous candidate solutions for convergence. Most real life engineering design optimization problems involve highly nonlinear objective and constraint functions arising out of computationally expensive simulations. For such problems, the computation cost of optimization using EAs can become quite prohibitive. This has stimulated the research into improving the efficiency of EAs reported herein. In this thesis, two major improvements are suggested for EAs. The first improvement is the use of spatial surrogate models to replace the expensive simulations for the evaluation of candidate solutions, and the other is a novel constraint handling technique. These modifications to EAs are tested on a number of numerical benchmarks and engineering examples using a fixed number of evaluations and the results are compared with the basic EA. In addition, the spatial surrogates are used in the truss design application. A generic framework for using spatial surrogate modeling is proposed. Multiple types of surrogate models are used for better approximation performance and a prediction accuracy based validation is used to ensure that the approximations do not misguide the evolutionary search. Two EAs are proposed using spatial surrogate models for evaluation and evolution. For numerical benchmarks, the spatial surrogate assisted EAs obtain significantly better (even orders of magnitude better) results than the basic EA, and on average 5-20% improvements in the objective value are observed for engineering examples.
Most EAs use constraint handling schemes that prefer feasible solutions over infeasible solutions. In the proposed infeasibility driven evolutionary algorithm (IDEA), a few infeasible solutions are maintained in the population to augment the evolutionary search through the infeasible regions along with the feasible regions to accelerate convergence. The studies on single and multi-objective test problems demonstrate the faster convergence of IDEA over EA. In addition, the infeasible solutions in the population can be used for trade-off studies. Finally, a discrete structures optimization (DSO) algorithm is proposed for sizing and topology optimization of trusses. In DSO, topology optimization and sizing optimization are separated to speed up the search for the optimum design. The optimum topology is identified using a strain-energy-based material removal procedure. The topology optimization process correctly identifies the optimum topology for 2-D and 3-D trusses using less than 200 function evaluations. The sizing optimization is performed later to find the optimum cross-sectional areas of structural elements. In surrogate assisted DSO (SDSO), spatial surrogates are used to accelerate the sizing optimization. The truss designs obtained using SDSO are very close (within 7% of the weight) to the best reported in the literature using only a fraction of the function evaluations (less than 7%).
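The selection step of an IDEA-style scheme, which retains a few good infeasible solutions alongside the feasible ones, can be sketched as follows (the ranking rule and the `infeasible_frac` parameter are simplifying assumptions, not the published algorithm):

```python
def idea_select(population, objective, violation, pop_size,
                infeasible_frac=0.2):
    """Selection keeping a few good infeasible solutions (IDEA-style).

    violation(x) == 0 means feasible; surviving infeasible solutions
    are those with the smallest (violation, objective) rank.
    """
    feas = sorted((x for x in population if violation(x) == 0),
                  key=objective)
    infeas = sorted((x for x in population if violation(x) > 0),
                    key=lambda x: (violation(x), objective(x)))
    n_inf = min(len(infeas), int(infeasible_frac * pop_size))
    chosen = infeas[:n_inf] + feas[:pop_size - n_inf]
    # Top up from the remaining infeasible if feasible ones run short.
    if len(chosen) < pop_size:
        chosen += infeas[n_inf:n_inf + pop_size - len(chosen)]
    return chosen[:pop_size]
```

Keeping near-feasible solutions with good objective values lets the search approach the optimum from the infeasible side of active constraints, which is where constrained optima typically lie.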
9.
Applying Dynamic Software Updates to Computationally-Intensive Applications / Kim, Dong Kwan / 22 July 2009
Dynamic software updates change the code of a computer program while it runs, thus saving the programmer's time and using computing resources more productively. This dissertation establishes the value of and recommends practices for applying dynamic software updates to computationally-intensive applications—a computing domain characterized by long-running computations, expensive computing resources, and a tedious deployment process. This dissertation argues that updating computationally-intensive applications dynamically can reduce their time-to-discovery metrics—the total time it takes from posing a problem to arriving at a solution—and, as such, should become an intrinsic part of their software lifecycle. To support this claim, this dissertation presents the following technical contributions: (1) a distributed consistency algorithm for synchronizing dynamic software updates in a parallel HPC application, (2) an implementation of the Proxy design pattern that is more efficient than the existing implementations, and (3) a dynamic update approach for Java Virtual Machine (JVM)-based applications using the Proxy pattern to offer flexibility and efficiency advantages, making it suitable for computationally-intensive applications. The contributions of this dissertation are validated through performance benchmarks and case studies involving computationally-intensive applications from the bioinformatics and molecular dynamics simulation domains. / Ph. D.
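A much-simplified sketch of how the Proxy pattern supports updating code while it runs (a Python stand-in for the dissertation's JVM-based mechanism; the consistency algorithm that decides when a swap is safe is elided):

```python
class Proxy:
    """Stable handle that forwards calls to a swappable implementation."""

    def __init__(self, impl):
        self._impl = impl

    def swap(self, new_impl):
        # State transfer between versions would happen here; deciding
        # *when* a swap is safe is the consistency algorithm's job.
        self._impl = new_impl

    def __getattr__(self, name):
        # Forward any other attribute access to the current implementation.
        return getattr(self._impl, name)

class SolverV1:
    def step(self, x):
        return x + 1      # original behaviour

class SolverV2:
    def step(self, x):
        return x + 2      # updated behaviour, installed while running

solver = Proxy(SolverV1())
before = solver.step(1)   # -> 2
solver.swap(SolverV2())   # dynamic update: clients keep the same handle
after = solver.step(1)    # -> 3
```

Because clients only ever hold the proxy, the long-running computation never restarts; only the object behind the handle changes.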
10.
Linear programming algorithms for detecting separated data in binary logistic regression models / Konis, Kjell Peter / January 2007
This thesis is a study of the detection of separation among the sample points in binary logistic regression models. We propose a new algorithm for detecting separation and demonstrate empirically that it can be computed fast enough to be used routinely as part of the fitting process for logistic regression models. The parameter estimates of a binary logistic regression model fit using the method of maximum likelihood sometimes do not converge to finite values. This phenomenon (also known as monotone likelihood or infinite parameters) occurs because of a condition among the sample points known as separation. There are two classes of separation. When complete separation is present among the sample points, iterative procedures for maximizing the likelihood tend to break down, making it clear that there is a problem with the model. However, when quasicomplete separation is present among the sample points, the iterative procedures for maximizing the likelihood tend to satisfy their convergence criterion before revealing any indication of separation. The new algorithm is based on a linear program with a nonnegative objective function that has a positive optimal value when separation is present among the sample points. We compare several approaches for solving this linear program and find that a method based on determining the feasibility of the dual to this linear program provides a numerically reliable test for separation among the sample points. A simulation study shows that this test can be computed in a similar amount of time to fitting the binary logistic regression model using the method of iteratively reweighted least squares; hence the test is fast enough to be used routinely as part of the fitting procedure. An implementation of our algorithm (as well as the other methods described in this thesis) is available in the R package safeBinaryRegression.
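As a simplified stand-in for the linear-programming test (emphatically not the thesis's algorithm), a capped perceptron illustrates the underlying criterion: complete separation holds exactly when some hyperplane puts the two response groups strictly on opposite sides:

```python
def completely_separated(X, y, max_epochs=10000):
    """Heuristic check for complete separation in binary data.

    Runs a perceptron on z_i = (1, x_i) for y_i == 1 and
    z_i = -(1, x_i) otherwise; it converges iff a strictly separating
    hyperplane exists.  The epoch cap makes a 'False' answer only
    heuristic; the thesis replaces this with an exact LP-based test.
    """
    Z = [([1.0] + list(x)) if yi == 1 else [-1.0] + [-v for v in x]
         for x, yi in zip(X, y)]
    w = [0.0] * len(Z[0])
    for _ in range(max_epochs):
        updated = False
        for z in Z:
            if sum(a * b for a, b in zip(w, z)) <= 0:   # not strictly right
                w = [a + b for a, b in zip(w, z)]        # perceptron update
                updated = True
        if not updated:
            return True   # every point strictly classified: separated
    return False
```

Run before (or alongside) the IRLS fit, such a check flags datasets on which the maximum likelihood estimates would diverge.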