211

A New Test to Build Confidence Regions Using Balanced Minimum Evolution

Dai, Wei 16 August 2013 (has links)
In phylogenetic analysis, an important issue is to construct confidence regions for gene trees estimated from DNA sequences. Usually, estimation of the trees is the initial step. Maximum likelihood methods are widely applied, but few tests are based on distance methods. In this thesis, we propose a new test based on balanced minimum evolution (BME). We first examine the normality assumption of pairwise distance estimates under various model misspecifications and also examine their variances, MSEs and squared biases. Then we compare the BME method with the weighted least squares (WLS) method in true tree reconstruction under different variance structures and model pairs. Finally, we develop a new test for finding a confidence region for the tree based on the BME method and demonstrate its effectiveness through simulation.
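For orientation, the balanced minimum evolution criterion the thesis builds on is usually stated via Pauplin's length estimate: the estimated length of a candidate topology T is a weighted sum of the pairwise distance estimates, and the BME tree is the topology minimising it. This is the standard statement from the literature, not notation quoted from the thesis:

```latex
\hat{L}(T) = \sum_{i<j} 2^{\,1-p_{ij}(T)}\,\hat{d}_{ij},
\qquad
\hat{T}_{\mathrm{BME}} = \arg\min_{T}\,\hat{L}(T),
```

where \hat{d}_{ij} is the estimated distance between taxa i and j, and p_{ij}(T) is the number of edges on the path joining leaves i and j in T.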
212

Power System State Estimation Using Phasor Measurement Units

Chen, Jiaxiong 01 January 2013 (has links)
State estimation is widely used as a tool to evaluate the prevailing real-time conditions of a power system. State estimation algorithms can diverge under stressed system conditions. This dissertation first investigates the impacts of variations in load levels and of topology errors on the convergence of the commonly used weighted least squares (WLS) state estimator. The influence of topology errors on the condition number of the gain matrix in the state estimator is also analyzed. The minimum singular value of the gain matrix is proposed as a measure of the distance between the operating point and state estimation divergence. To study the impact of load increments on the convergence of the WLS state estimator, two types of load increment are utilized: one increments all load buses, and the other increments a single load. In addition, phasor measurement unit (PMU) measurements are applied in state estimation to verify whether they can resolve the divergence problem and improve state estimation accuracy. The dissertation investigates the impacts of variations in line power flow increments and of topology errors on the convergence of the WLS state estimator. A simple 3-bus system and the IEEE 118-bus system are used as test cases to verify the common rule. Furthermore, the simulation results show that adding PMU measurements generally improves the robustness of state estimation. Two new approaches for improving the robustness of state estimation with PMU measurements are proposed: one is equality-constrained state estimation with PMU measurements, and the other is Hachtel's matrix state estimation with PMU measurements. The dissertation also proposes a new heuristic approach for the optimal placement of PMUs in a power system to improve state estimation accuracy. For adding PMU measurements into the estimator, two methods are investigated: Method I mixes PMU measurements with conventional measurements in the estimator, while Method II adds PMU measurements through a post-processing step. The two methods achieve very similar state estimation results, but Method II is more time-efficient and does not require modifying the existing state estimation software.
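As a rough sketch of the estimator discussed above: a WLS state estimator iterates Gauss-Newton steps on the weighted residual, and the gain matrix G = HᵀR⁻¹H is where both the condition number and the proposed minimum-singular-value divergence measure live. The snippet below illustrates one such step on a toy linear model; the measurement model, numbers and function names are illustrative, not taken from the dissertation.

```python
import numpy as np

def wls_step(x, z, h, H, R):
    """One Gauss-Newton step of weighted least squares state estimation.

    x: current state estimate; z: measurement vector;
    h: state -> predicted measurements; H: state -> measurement Jacobian;
    R: measurement error covariance.
    """
    Hx = H(x)
    W = np.linalg.inv(R)
    G = Hx.T @ W @ Hx                    # gain matrix
    dx = np.linalg.solve(G, Hx.T @ W @ (z - h(x)))
    # A small minimum singular value of G signals that the operating
    # point is close to state-estimation divergence.
    sigma_min = np.linalg.svd(G, compute_uv=False).min()
    return x + dx, sigma_min

# Toy linear example: two states observed through three measurements.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
z = A @ np.array([1.02, 0.97]) + 0.01 * np.random.randn(3)
R = np.diag([1e-4, 1e-4, 1e-4])
x_est, sigma_min = wls_step(np.ones(2), z, lambda x: A @ x, lambda x: A, R)
print(x_est, sigma_min)
```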
213

Causal inference and case-control studies with applications related to childhood diabetes / Kausal inferens och fall-kontroll studier med applikationer inom barndiabetes

Persson, Emma January 2014 (has links)
This thesis contributes to the research area of causal inference, where estimation of the effect of a treatment on an outcome of interest is the main objective. Some aspects of the estimation of average causal effects in observational studies in general, and case-control studies in particular, are explored. An important part of estimating causal effects in an observational study is to control for covariates. The first paper of this thesis concerns the selection of minimal covariate sets sufficient for unconfoundedness of the treatment assignment. A data-driven implementation of two covariate selection algorithms is proposed and evaluated. A common sampling scheme in epidemiology, and when investigating rare events, is the case-control design. In the second paper we study estimators of the marginal causal odds ratio in matched and independent case-control designs. Estimators that, under a logistic regression model, utilize information about the known prevalence of being a case are examined and compared through simulations. The third paper investigates the particular situation where case-control sampled data are reused to estimate the effect of the case-defining event on an outcome of interest. The consequence of ignoring the design when estimating the average causal effect is discussed and a design-weighted matching estimator is proposed. The performance of the estimator is evaluated with simulation experiments, when matching on the covariates directly and when matching on the propensity score. The last paper studies the effect of type 1 diabetes mellitus (T1DM) on school achievements using data from the Swedish Childhood Diabetes Register, a population-based incidence register. We apply theoretical results from the second and third papers in the estimation of the average causal effect within the T1DM population. A matching estimator that accounts for the matched case-control design is used.
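For reference, the marginal causal odds ratio examined in the second paper is conventionally defined through the potential outcomes Y(1) (under treatment) and Y(0) (under control); this is the textbook definition, stated here for orientation rather than quoted from the thesis:

```latex
\mathrm{OR} =
\frac{P\{Y(1)=1\}\,/\,P\{Y(1)=0\}}
     {P\{Y(0)=1\}\,/\,P\{Y(0)=0\}}.
```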
214

Contributions to 3D Image Analysis using Discrete Methods and Fuzzy Techniques : With Focus on Images from Cryo-Electron Tomography

Gedda, Magnus January 2010 (has links)
With the emergence of new imaging techniques, researchers are always eager to push the boundaries by examining objects either smaller or further away than what was previously possible. The development of image analysis techniques has greatly helped to introduce objectivity and coherence in measurements and decision making. It has become an essential tool for facilitating both large-scale quantitative studies and qualitative research. In this thesis, methods were developed for analysis of low-resolution (with respect to the size of the imaged objects) three-dimensional (3D) images with low signal-to-noise ratios (SNR), applied to images from cryo-electron tomography (cryo-ET) and fluorescence microscopy (FM). The main focus is on methods of low complexity that take into account both grey-level and shape information, to facilitate large-scale studies. Methods were developed to localise and represent complex macromolecules in images from cryo-ET. The methods were applied to Immunoglobulin G (IgG) antibodies and MET proteins. The low resolution and low SNR required that grey-level information be utilised to create fuzzy representations of the macromolecules. To extract structural properties, a method was developed that uses grey-level-based distance measures to facilitate decomposition of the fuzzy representations into sub-domains. The structural properties of the MET protein were analysed by developing an analytical curve representation of its stalk. To facilitate large-scale analysis of structural properties of nerve cells, a method for tracing neurites in FM images using local path-finding was developed. Both theoretical and implementation details of computationally heavy approaches were examined to keep the time complexity of the developed methods low. Grey-weighted distance definitions and various aspects of their implementations were examined in detail to form guidelines on which definition to use in which setting and which implementation is the fastest. Heuristics were developed to speed up computations when calculating grey-weighted distances between two points. The methods were evaluated on both real and synthetic data, and the results show that the methods provide a step towards facilitating large-scale studies of images from both cryo-ET and FM.
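To make "grey-weighted distance" concrete: under one common definition, each step of a path costs the mean grey value of its endpoints times the step length, and the distance between two pixels is the cost of the cheapest path. The sketch below computes this with Dijkstra's algorithm on a 2D image; the thesis works in 3D and compares several competing definitions, so this particular cost and the function names are illustrative only.

```python
import heapq
import numpy as np

def grey_weighted_distance(img, start, end):
    """Cheapest-path distance where a unit step p -> q costs
    0.5 * (img[p] + img[q]) (4-connectivity)."""
    rows, cols = img.shape
    dist = np.full(img.shape, np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            return d
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (img[r, c] + img[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return np.inf

img = np.array([[1.0, 9.0, 1.0],
                [1.0, 9.0, 1.0],
                [1.0, 1.0, 1.0]])
print(grey_weighted_distance(img, (0, 0), (0, 2)))  # routes around the bright column
```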
215

Two-dimensional Finite Volume Weighted Essentially Non-oscillatory Euler Schemes With Uniform And Non-uniform Grid Coefficients

Elfarra, Monier Ali 01 February 2005 (has links) (PDF)
In this thesis, Finite Volume Weighted Essentially Non-Oscillatory (FV-WENO) codes for the one- and two-dimensional discretised Euler equations are developed. The construction and application of the FV-WENO scheme and codes are described, and the effects of the grid coefficients, as well as the effect of the Gaussian quadrature, on the solution are tested and discussed. WENO schemes are high-order accurate schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies in the high-order approximation step, where a convex combination of all the candidate stencils is used with certain weights; those weights effectively eliminate the stencils that contain a discontinuity. WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures. The applications tested in this thesis are the diverging nozzle, shock-vortex interaction, supersonic channel flow, flow over a bump, and a supersonic staggered wedge cascade. The numerical solutions for the diverging nozzle and the supersonic channel flow are compared with analytical solutions, the results for the shock-vortex interaction are compared with Roe scheme results, and the results for the bump flow and the supersonic staggered cascade are compared with results from the literature.
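The "certain weights" mentioned above are, in the classical Jiang-Shu formulation, nonlinear weights built from per-stencil smoothness indicators, so that a stencil crossing a discontinuity receives a vanishingly small weight. The standard form is reproduced here for orientation, not quoted from the thesis:

```latex
\omega_k = \frac{\alpha_k}{\sum_{m} \alpha_m},
\qquad
\alpha_k = \frac{d_k}{(\varepsilon + \beta_k)^2},
```

where d_k are the linear (optimal) weights, \beta_k measures the smoothness of the reconstruction on stencil k, and \varepsilon (e.g. 10^{-6}) prevents division by zero.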
216

Minimising weighted mean distortion : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mathematics at Massey University, Albany, New Zealand

McKubre-Jordens, Maarten Nicolaas January 2009 (has links)
There has been considerable recent interest in geometric function theory, nonlinear partial differential equations, harmonic mappings, and the connection of these to minimal energy phenomena. This work explores Nitsche's 1962 conjecture concerning the nonexistence of harmonic mappings between planar annuli, cast in terms of distortion functionals. The connection between the Nitsche problem and the famous Grötzsch problem is established by means of a weight function. Traditionally, these kinds of problems are investigated in the class of quasiconformal mappings, and the assumption is usually made a priori that solutions preserve various symmetries. Here the conjecture is solved in the much wider class of mappings of finite distortion, symmetry-preservation is proved, and ellipticity of the variational equations concerning these sorts of general problems is established. Furthermore, various alternative interpretations of the weight function introduced herein lead to an interesting analysis of a much wider variety of critical phenomena -- when the weight function is interpreted as a thickness, density or metric, the results lead to a possible model for tearing or breaking phenomena in material science. These physically relevant critical phenomena arise, surprisingly, out of purely theoretical considerations.
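For context, the Nitsche conjecture asserts a sharp obstruction to the existence of such mappings: as usually stated in the literature, a harmonic homeomorphism of the annulus A(1, r) = {1 < |z| < r} onto A(1, R) exists if and only if the target annulus is not too thin, namely

```latex
R \;\ge\; \frac{1}{2}\left(r + \frac{1}{r}\right).
```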
217

Local search methods for constraint problems

Muhammad, Muhammad Rafiq Bin Unknown Date (has links) (PDF)
This thesis investigates the use of local search methods in solving constraint problems. Such problems are very hard in general, and local search offers a useful and successful alternative to existing techniques. The focus of the thesis is to analyze the technique of invariants used in local search. The use of invariants has recently become the cornerstone of local search technology, as invariants provide a declarative way to specify incremental algorithms. We have produced a series of program libraries in C++ known as the One-Way-Solver. The One-Way-Solver includes implementations of incremental data structures and is a useful tool for the implementation of local search. The One-Way-Solver is applied to two challenging constraint problems: Boolean satisfiability testing (SAT) and university course timetabling.
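The role of invariants can be illustrated on SAT, one of the two problems above: declare "number of true literals per clause" as an invariant, and a variable flip then updates the count of unsatisfied clauses incrementally, touching only the clauses that mention the flipped variable instead of re-evaluating all of them. The sketch below is a generic Python illustration of this idea, not code from the One-Way-Solver (which is a C++ library).

```python
class SatInvariant:
    """Incrementally maintains, per clause, the number of true literals,
    and globally the number of unsatisfied clauses."""

    def __init__(self, clauses, assignment):
        # A literal is +v or -v for variable v; assignment[v] is True/False.
        self.clauses = clauses
        self.assignment = assignment
        self.occurs = {}                      # variable -> indices of clauses using it
        self.true_count = []
        for ci, clause in enumerate(clauses):
            self.true_count.append(sum(self._lit_true(l) for l in clause))
            for lit in clause:
                self.occurs.setdefault(abs(lit), []).append(ci)
        self.unsat = sum(1 for c in self.true_count if c == 0)

    def _lit_true(self, lit):
        val = self.assignment[abs(lit)]
        return val if lit > 0 else not val

    def flip(self, v):
        """Flip variable v, updating only the clauses that mention it."""
        for ci in self.occurs.get(v, []):
            for lit in self.clauses[ci]:
                if abs(lit) == v:
                    before = self.true_count[ci]
                    self.true_count[ci] += -1 if self._lit_true(lit) else 1
                    if before == 0 and self.true_count[ci] > 0:
                        self.unsat -= 1
                    elif before > 0 and self.true_count[ci] == 0:
                        self.unsat += 1
        self.assignment[v] = not self.assignment[v]

clauses = [[1, 2], [-1, 3], [-2, -3]]
inv = SatInvariant(clauses, {1: True, 2: True, 3: False})
print(inv.unsat)   # 1: clause [-1, 3] is unsatisfied
inv.flip(1)
print(inv.unsat)   # 0: all clauses satisfied
```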
218

Some problems in high dimensional data analysis

Pham, Tung Huy January 2010 (has links)
The bloom of economics and technology has had an enormous impact on society. Along with these developments, human activities nowadays produce massive amounts of data that can be collected easily and at relatively low cost with the aid of new technologies. Many examples can be mentioned here, including web term-document data, sensor arrays, gene expression, finance data, imaging and hyperspectral analysis. Because of the enormous amount of data from various different and new sources, more and more challenging scientific problems appear. These problems have changed the types of problems on which mathematical scientists work.

In traditional statistics, the dimension of the data, p say, is low, with many observations, n say. In this case, classical rules such as the Central Limit Theorem are often applied to obtain some understanding from the data. A new challenge for statisticians today is a different setting, where the data dimension is very large and the number of observations is small. The mathematical assumption now could be p > n, or even p going to infinity with n fixed; for example, there may be few patients but many genes. In these cases, classical methods fail to produce a good understanding of the nature of the problem. Hence, new methods need to be found to solve these problems, and mathematical explanations are needed to generalize these cases.

The research presented in this thesis addresses two problems, variable selection and classification, in the case where the dimension is very large. The work on variable selection, in particular the Adaptive Lasso, was completed by June 2007, the research on classification was carried out throughout 2008 and 2009, and the research on the Dantzig selector and the Lasso was finished in July 2009. Therefore, this thesis is divided into two parts. In the first part of the thesis we study the Adaptive Lasso, the Lasso and the Dantzig selector. In particular, Chapter 2 presents some results for the Adaptive Lasso, and Chapter 3 provides two examples showing that neither the Dantzig selector nor the Lasso is definitely better than the other. The second part of the thesis is organized as follows. In Chapter 5 we construct the model setting. In Chapter 6 we summarize, and prove, results on the scaled centroid-based classifier. Because there are similarities between the Support Vector Machine (SVM) and Distance Weighted Discrimination (DWD) classifiers, Chapter 8 introduces a class of distance-based classifiers that can be considered a generalization of the SVM and DWD classifiers. Chapters 9 and 10 are about the SVM and DWD classifiers, and Chapter 11 demonstrates the performance of these classifiers on simulated data sets and some cancer data sets.
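For orientation, the two selection methods compared in the first part each fit on one line (standard definitions from the literature, not quoted from the thesis): the Adaptive Lasso re-weights the l1 penalty by an initial estimate, while the Dantzig selector minimises the l1 norm subject to a bound on the correlation between the residual and the design:

```latex
\hat{\beta}_{\mathrm{AL}} = \arg\min_{\beta}\,
\|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} \frac{|\beta_j|}{|\hat{\beta}^{\mathrm{init}}_j|^{\gamma}},
\qquad
\hat{\beta}_{\mathrm{DS}} = \arg\min_{\beta}\,
\|\beta\|_1 \ \text{ subject to } \ \|X^{\top}(y - X\beta)\|_{\infty} \le \lambda.
```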
219

Automatic emotion recognition: an investigation of acoustic and prosodic parameters

Sethu, Vidhyasaharan, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2009 (has links)
An essential step to achieving human-machine speech communication with the naturalness of communication between humans is developing a machine that is capable of recognising emotions based on speech. This thesis presents research addressing this problem by making use of acoustic and prosodic information. At a feature level, novel group delay and weighted frequency features are proposed. The group delay features are shown to emphasise information pertaining to formant bandwidths and are shown to be indicative of emotions. The weighted frequency feature, based on the recently introduced empirical mode decomposition, is proposed as a compact representation of the spectral energy distribution and is shown to outperform other estimates of energy distribution. Feature level comparisons suggest that detailed spectral measures are very indicative of emotions while exhibiting greater speaker specificity. Moreover, it is shown that all features are characteristic of the speaker and require some sort of normalisation prior to use in a multi-speaker situation. A novel technique for normalising speaker-specific variability in features is proposed, which leads to significant improvements in the performance of systems trained and tested on data from different speakers. This technique is also used to investigate the amount of speaker-specific variability in different features. A preliminary study of phonetic variability suggests that phoneme-specific traits are not modelled by the emotion models and that speaker variability is a more significant problem in the investigated setup. Finally, a novel approach to emotion modelling that takes into account temporal variations of speech parameters is analysed. An explicit model of the glottal spectrum is incorporated into the framework of the traditional source-filter model, and the parameters of this combined model are used to characterise speech signals. An automatic emotion recognition system that takes into account the shape of the contours of these parameters as they vary with time is shown to outperform a system that models only the parameter distributions. The novel approach is also empirically shown to be on par with human emotion classification performance.
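A minimal sketch of the kind of speaker normalisation the abstract refers to, in its simplest per-speaker z-score form; the thesis proposes a more elaborate technique, so this example only fixes the idea, and all names in it are illustrative.

```python
import numpy as np

def normalise_per_speaker(features, speaker_ids):
    """Z-score each feature dimension within each speaker, so that
    speaker-specific offsets and scales do not dominate emotion cues.

    features   : (n_frames, n_dims) array
    speaker_ids: length-n_frames array of speaker labels
    """
    out = np.empty_like(features, dtype=float)
    for spk in np.unique(speaker_ids):
        mask = speaker_ids == spk
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0) + 1e-8  # avoid division by zero
        out[mask] = (features[mask] - mu) / sigma
    return out

X = np.random.randn(100, 13) + 5.0           # e.g. 13 cepstral-like features
spk = np.repeat(["A", "B"], 50)
Xn = normalise_per_speaker(X, spk)
print(Xn[spk == "A"].mean(axis=0).round(6))  # ~0 within each speaker
```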
