191

An Investigation into Classification of High Dimensional Frequency Data

McGraw, John M. 25 October 2001
We desire an algorithm to classify a physical object in "real time" using an easily portable probing device. The probe excites a given object at frequencies from 100 MHz up to 800 MHz at intervals of 0.5 MHz, so the data used for classification is the 1400-component vector of these frequency responses. The Interdisciplinary Center for Applied Mathematics (ICAM) was asked to help develop an algorithm and executable computer code for the probing device to use in its classification analysis. Because of these and other requirements, all work had to be done in Matlab; hence a significant portion of the effort was spent writing and testing Matlab code implementing the various statistical techniques. We offer three approaches to classification: maximum log-likelihood estimates, correlation coefficients, and confidence bands. Related work included considering ways to recover and exploit certain symmetry characteristics of the objects (using the response data). Present investigations are not entirely conclusive, but the correlation-coefficient classifier seems to produce reasonable and consistent results. All three methods currently require evaluation of the full 1400-component vector. It has been suggested that unknown portions of the vectors may carry extraneous or misleading information, or information common to all classes; identifying and removing those components may benefit classification regardless of method. Another advantage of dimension reduction should be a strengthening of mean and covariance estimates. / Master of Science
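The correlation-coefficient classifier favoured above lends itself to a compact sketch. The following Python version (the thesis itself used Matlab) assumes each class is summarised by a mean response template; the templates and probe data here are illustrative, not the thesis data:

```python
import numpy as np

def correlation_classifier(x, class_templates):
    """Assign x to the class whose template it correlates with most strongly."""
    best_label, best_r = None, -np.inf
    for label, template in class_templates.items():
        r = np.corrcoef(x, template)[0, 1]   # Pearson correlation coefficient
        if r > best_r:
            best_label, best_r = label, r
    return best_label, best_r

# Toy usage on the 100-800 MHz grid sampled every 0.5 MHz (1400 points)
freqs = np.arange(100.0, 800.0, 0.5)
templates = {"A": np.sin(freqs / 50.0), "B": np.cos(freqs / 50.0)}
rng = np.random.default_rng(0)
probe = templates["A"] + 0.1 * rng.standard_normal(freqs.size)
print(correlation_classifier(probe, templates))   # expect ('A', r near 1)
```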
192

Enhancements in Markovian Dynamics

Ali Akbar Soltan, Reza 12 April 2012
Many common statistical techniques for modeling multidimensional dynamic data sets can be seen as variants of one (or multiple) underlying linear/nonlinear model(s). These techniques fall into two broad categories, supervised and unsupervised learning. The emphasis of this dissertation is on unsupervised learning under multiple generative models. For linear models, this unification has been achieved through collective observations and derivations made by previous authors over the last few decades: factor analysis, polynomial chaos expansion, principal component analysis, Gaussian mixture clustering, vector quantization, and Kalman filter models can all be unified as variations of unsupervised learning under a single basic linear generative model. Hidden Markov modeling (HMM), however, is categorized as unsupervised learning under multiple linear/nonlinear generative models. This dissertation is primarily focused on hidden Markov models (HMMs). The first half of the dissertation studies enhancements to the theory of hidden Markov modeling along three branches: 1) a robust, closed-form parameter estimation solution to the expectation-maximization (EM) process of HMMs for the case of elliptically symmetric densities; 2) a two-step HMM, with a combined state sequence via an extended Viterbi algorithm, for smoother state estimation; and 3) a duration-dependent HMM for estimating the expected residency frequency in each state. The second half of the dissertation studies three novel applications of these methods: 1) applications of Markov switching models to bifurcation theory in nonlinear dynamics; 2) a game-theory application of HMMs, based on the fundamental theory of card counting, with an example on the game of Baccarat; and 3) trust modeling and the estimation of trustworthiness metrics in cyber-security systems via Markov switching models. The duration-dependent HMM yields a better estimate of the expected duration of stay in each regime; the robust, closed-form solution to the EM algorithm provides robustness against outliers in the training data as well as higher computational efficiency in the maximization step; and the two-step HMM achieves smoother probability estimation with higher likelihood than the standard HMM. / Ph. D.
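The two-step HMM extends the standard Viterbi recursion. For reference, a minimal log-domain Viterbi decoder for a discrete-observation HMM looks roughly like this (a generic sketch; the dissertation's extended combined-state-sequence version is not reproduced here):

```python
import numpy as np

def viterbi(log_A, log_B, log_pi, obs):
    """Most likely state sequence of a discrete HMM, computed in log domain.

    log_A  : (S, S) log transition matrix, log_A[i, j] = log P(j | i)
    log_B  : (S, V) log emission matrix,   log_B[i, k] = log P(symbol k | state i)
    log_pi : (S,)   log initial state distribution
    obs    : sequence of observation indices
    """
    S, T = log_A.shape[0], len(obs)
    delta = np.empty((T, S))            # best log score ending in each state
    psi = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy usage: 2 states, 2 symbols
A = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
B = np.log(np.array([[0.7, 0.3], [0.1, 0.9]]))
pi = np.log(np.array([0.5, 0.5]))
print(viterbi(A, B, pi, [0, 0, 1, 1, 1]))   # -> [0, 0, 1, 1, 1]
```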
193

When stuff gets old: material surface characteristics and the visual perception of material change over time

De Korte, Ellen E.M., Logan, Andrew J., Bloj, Marina 23 October 2022
Materials' surfaces change over time due to chemical and physical processes. These processes can significantly alter a material's visual appearance, yet we can recognise the material as the same. The present study examined the extent of change the human visual system can detect in specific materials over time. Participants (N = 5) were shown images of different materials (Banana, Copper, Leaf) from an existing calibrated set of photographs. Participants indicated which image pair (of the two pairs shown) displayed the larger difference. Estimated perceptual scales showed that observers were able to rank the images of aged materials systematically. Next, we examined the role that global and local changes in material surface colour play in the perception of material change. We altered the information about colour and geometrical distribution in the images used in the first experiment, and participants repeated the task with the altered images. Our results showed significant differences between individual observers. Most importantly, participants' ability to rank the images varied with material type; the leaf images were particularly affected by our alteration of the geometrical distribution. Together, our findings identify factors contributing to the perception of material change over time. / This work was supported by the European Union's Horizon 2020 research and innovation programme [Grant Agreement No 765121].
194

Modelos de regressão beta com erro nas variáveis / Beta regression model with measurement error

Carrasco, Jalmar Manuel Farfan 25 May 2012
In this thesis, we propose a beta regression model with measurement error. Among nonlinear models with measurement error, such a model has not been studied extensively. We discuss estimation methods, namely approximate maximum likelihood, approximate pseudo-maximum likelihood, and regression calibration. The maximum likelihood method estimates parameters by directly maximizing the logarithm of the likelihood function. The pseudo-maximum likelihood method is used when the inference in a given model involves only some but not all parameters; in that sense, the model presents parameters of interest as well as nuisance parameters. When we replace the true covariate (the unobserved variable) with an estimate of the conditional expectation of the unobserved variable given the observed one, the method is known as regression calibration. We compare these estimation methods through a Monte Carlo simulation study, which shows that the maximum likelihood and pseudo-maximum likelihood methods perform better than regression calibration and the naïve approach. We use the programming language Ox (Doornik, 2011) as a computational tool. We derive the asymptotic distribution of the estimators in order to compute confidence intervals and test hypotheses, as proposed by Carroll et al. (2006, Section A.6.6), Guolo (2011), and Gong and Samaniego (1981). Moreover, we use the likelihood ratio and gradient statistics to test hypotheses, and we carry out a simulation study to evaluate the performance of these tests. We develop diagnostic techniques for the beta regression model with measurement error. We propose weighted standardized residuals, as defined by Espinheira (2008), to verify the model assumptions and to detect outliers. Measures of global influence, such as the generalized Cook's distance and the likelihood displacement, are used to detect influential points. In addition, we use the conformal approach for evaluating local influence under three perturbation schemes: case-weight perturbation, response-variable perturbation, and perturbation of the covariate with and without measurement error. We apply our results to two real data sets to illustrate the theory developed. Finally, we present our conclusions and possible future work.
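The regression-calibration idea (replace the mismeasured covariate by its conditional expectation given the observation) is easy to illustrate numerically. The sketch below is not the thesis's Ox code; it assumes the measurement-error variance and covariate distribution are known, and compares a naïve logit-link beta regression fit with a calibrated one:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def negloglik(params, X, y):
    """Negative log-likelihood of a beta regression with a logit mean link."""
    beta, phi = params[:-1], np.exp(params[-1])   # precision kept positive
    mu = expit(X @ beta)
    a, b = mu * phi, (1 - mu) * phi               # Beta(a, b) shape parameters
    return -np.sum(gammaln(a + b) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                        # true covariate, unobserved
w = x + rng.normal(scale=0.5, size=n)         # observed with measurement error
mu = expit(-0.5 + 1.0 * x)                    # true coefficients (-0.5, 1.0)
y = rng.beta(mu * 50, (1 - mu) * 50)

def fit(cov):
    X = np.column_stack([np.ones(n), cov])
    return minimize(negloglik, np.zeros(3), args=(X, y), method="BFGS").x[:2]

lam = 1.0 / (1.0 + 0.5**2)    # reliability var(x)/var(w); here E[x | w] = lam*w
print("naive fit:     ", fit(w))        # slope attenuated toward zero
print("calibrated fit:", fit(lam * w))  # slope approximately recovered
```

On a run like this the naïve slope estimate is biased toward zero, while the calibrated fit lands near the true value, which is the behaviour the simulation study above quantifies.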
195

Métodos de estimação baseados na função de verossimilhança para modelos lineares elípticos / Estimation methods based on the likelihood function in Elliptical Linear Models

Pérez, Natalia Andrea Milla 14 September 2018
The aim of this thesis is to study estimation methods based on the likelihood function in elliptical linear mixed models. First, we review the modified profile maximum likelihood and restricted maximum likelihood methods, as well as the traditional maximum likelihood method, in normal linear models. Then, we extend these methodologies to elliptical linear models and compare the estimating equations derived under each method. The main motivation of the work is that the restricted maximum likelihood method has been widely applied in normal linear mixed models in order to reduce the bias of the maximum likelihood variance-component estimators. Hence, we investigate its extension to elliptical linear mixed models and compare it with the modified profile maximum likelihood and maximum likelihood methods. Particular studies for Student-t and power exponential linear mixed models are presented.
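The bias-reduction property that motivates REML is visible already in the fixed-effects normal linear model, where the ML and REML criteria reduce to different divisors for the residual sum of squares. A minimal illustration (a generic sketch, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 4
X = rng.normal(size=(n, p))
y = X @ np.ones(p) + rng.normal(scale=2.0, size=n)    # true sigma^2 = 4

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)
print("ML estimate of sigma^2:  ", rss / n)        # biased: E = sigma^2 (n-p)/n
print("REML estimate of sigma^2:", rss / (n - p))  # unbiased, accounts for the
                                                   # p estimated regression terms
```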
197

Verallgemeinerte Maximum-Likelihood-Methoden und der selbstinformative Grenzwert / Generalised maximum likelihood methods and the selfinformative limit

Johannes, Jan 16 December 2002
We assume we observe a random variable X with unknown probability distribution P. One major goal of mathematical statistics is the estimation of a parameter theta(P) based on an observation X = x. Under the assumption that P belongs to a dominated family of probability distributions, we can apply the maximum likelihood principle (MLP). Alternatively, the Bayes approach can be used to estimate the parameter. Under some regularity conditions it turns out that the maximum likelihood estimate (MLE) is the limit of a sequence of Bayes estimates (BEs). Note that BEs can be defined even in situations where no dominating measure exists, which allows us to derive an extension of the MLP using the Bayes approach. Moreover, two versions of a generalised MLE (gMLE) are presented, introduced by Kiefer and Wolfowitz and by Gill, respectively. Based on these known results, we define a selfinformative limit and a selfinformative posterior carrier. In the special case of a model with a dominated distribution family, we state sufficient conditions under which the set of MLEs is a selfinformative posterior carrier or, in the case of a unique MLE, a selfinformative limit. The result for the posterior carrier is extended to a more general model without dominated distributions; in particular, we show that the set of gMLEs of Kiefer and Wolfowitz is a posterior carrier. Furthermore, we calculate the selfinformative limit and posterior carrier, respectively, in the case of a model with possibly nonidentifiable parameters. The thesis focuses on a multivariate semiparametric linear model. We first show that, in the case of a nonparametric model, the selfinformative limit coincides with the gMLE of Kiefer and Wolfowitz as well as with that of Gill, if a Dirichlet process serves as prior. We then investigate both versions of the gMLE and the selfinformative limit in the multivariate semiparametric linear model, where the prior for the latter estimator is given by a Dirichlet process and a normal-Wishart distribution. In general the estimators are not identical; however, in the special case of a semiparametric location model the three considered estimates coincide again.
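The statement that the MLE arises as the limit of a sequence of Bayes estimates can be made concrete in the conjugate normal-location model: as the prior variance grows, the posterior mean tends to the sample mean, which is the MLE. A small numerical sketch (illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=1.0, size=20)   # data with known sigma^2 = 1
mle = y.mean()                                # the MLE of the location

def bayes_estimate(tau2, m=0.0, sigma2=1.0):
    """Posterior mean under a N(m, tau2) prior on the location."""
    w = (y.size / sigma2) / (y.size / sigma2 + 1.0 / tau2)  # weight on data
    return w * y.mean() + (1.0 - w) * m

for tau2 in [0.1, 1.0, 10.0, 1e3, 1e6]:       # flatter and flatter priors
    print(f"tau^2 = {tau2:>8}: posterior mean = {bayes_estimate(tau2):.6f}")
print(f"MLE = {mle:.6f}")                     # the limit of the sequence
```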
198

Optimierung der Positronen-Emissions-Tomographie bei der Schwerionentherapie auf der Basis von Röntgentomogrammen / Optimisation of positron emission tomography in heavy-ion therapy on the basis of X-ray tomograms

Pönisch, Falk 16 April 2003
Positron emission tomography (PET) in heavy-ion therapy is an important method of quality assurance in tumour therapy with carbon ions. This thesis describes improvements to the PET method that allow more precise statements about the applied dose. Building on the fundamentals (Chapter 2), the new developments are described in the three subsequent sections: modelling of the PET imaging process, scatter correction for PET in heavy-ion therapy, and processing of the reconstructed PET data. The PET method in heavy-ion therapy is based on a comparison between measured and predicted activity distributions. The models used in the simulation (generation of the positron emitters, their spread, and the transport and detection of the annihilation photons) should be as precise as possible so that a meaningful comparison is possible. The accuracy of the description of the physical processes was improved and time-efficient algorithms were applied, leading to a considerable reduction in computing time. The expected and measured spatial radioactivity distributions are reconstructed with an iterative procedure [Lau99]. The measured data must be corrected for Compton scattering of the annihilation photons occurring in the measured object; a suitable scatter-correction procedure for therapy monitoring is proposed and its implementation described. To assess the quality of the treatment, the measured and simulated activity distributions are compared. For this purpose, software was developed within this thesis that visualises the reconstructed PET data and incorporates the anatomical information of the X-ray tomogram. Only this evaluation procedure made it possible to uncover errors in the physical beam model and thereby improve treatment planning.
199

Estimation and Hypothesis Testing in Stochastic Regression

Sazak, Hakan Savas 01 December 2003
Regression analysis is very popular among researchers in various fields, but almost all researchers use classical methods, which assume that X is nonstochastic and the error is normally distributed. In real-life problems, however, X is generally stochastic and the error can be nonnormal. The maximum likelihood (ML) estimation technique, known to have optimal features, is very problematic when the distribution of X (the marginal part) or of the error (the conditional part) is nonnormal. The modified maximum likelihood (MML) technique, which asymptotically yields estimators equivalent to the ML estimators, makes it possible to conduct estimation and hypothesis-testing procedures under nonnormal marginal and conditional distributions. In this study we show that MML estimators are highly efficient and robust. Moreover, test statistics based on the MML estimators are much more powerful and robust than test statistics based on the least squares (LS) estimators mostly used in the literature. Theoretically, MML estimators are asymptotically minimum variance bound (MVB) estimators, but simulation results show that they are highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration, and the results given are based on these distributions. As future work, the MML technique can be applied to other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
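The efficiency gap that motivates MML can be seen in a toy simulation. The sketch below does not implement the MML estimator itself (which replaces the intractable likelihood equations with linearized closed-form ones); as a stand-in, it fits the ML estimator numerically under t-distributed errors and compares its sampling variance with that of LS:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as t_dist

rng = np.random.default_rng(0)
nu, n, reps = 3, 50, 200
x = rng.normal(size=n)                 # fixed design across replications

def ml_slope(y):
    """Numerical ML fit of y = a + b*x assuming t(nu) errors."""
    nll = lambda p: -t_dist.logpdf(y - p[0] - p[1] * x, df=nu).sum()
    return minimize(nll, [0.0, 0.0], method="Nelder-Mead").x[1]

ls_slopes, ml_slopes = [], []
for _ in range(reps):
    y = 1.0 + 2.0 * x + rng.standard_t(nu, size=n)   # heavy-tailed errors
    ls_slopes.append(np.polyfit(x, y, 1)[0])         # least-squares slope
    ml_slopes.append(ml_slope(y))
print("LS slope variance:", np.var(ls_slopes))   # larger under heavy tails
print("ML slope variance:", np.var(ml_slopes))   # smaller: the gap MML targets
```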
200

B-Spline Based Multitarget Tracking

Sithiravel, Rajiv January 2014
Multitarget tracking in the presence of false alarms is a difficult problem. The objective of multitarget tracking is to estimate the number of targets and their states recursively from the available observations. At any given time, targets can be born, die, or spawn from existing targets. Sensors detect these targets with a defined threshold, and the observations are typically contaminated by false alarms; targets with low signal-to-noise ratio (SNR) may not be detected at all. Random Finite Set (RFS) filters can solve such multitarget problems efficiently. In particular, one of the best and most widely used RFS-based filters is the Probability Hypothesis Density (PHD) filter. The PHD filter approximates the posterior probability density function (PDF) by its first-order moment only, under the assumption that the target SNR is sufficiently high. The PHD filter accommodates target death, birth, spawning, and missed detections via its well-known implementations, the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and Gaussian Mixture Probability Hypothesis Density (GM-PHD) methods. The SMC-PHD filter suffers from well-known degeneracy problems, while the GM-PHD filter may not be suitable for nonlinear and non-Gaussian target tracking problems. It is desirable to have a filter that can provide continuous estimates for any distribution; this is the motivation for the use of B-splines in this thesis. One main focus of the thesis is the B-spline based PHD (SPHD) filter. Spline theory is well developed and has been used in academia and industry for more than five decades. B-splines can represent numerical, geometrical, and statistical functions and models, including the PDF and the PHD. The SPHD filter can be applied to linear, nonlinear, Gaussian, and non-Gaussian multitarget tracking applications. SPHD continuity can be maintained by selecting splines of order three or more, which avoids the degeneracy-related problem. Another important characteristic of the SPHD filter is that the SPHD can be controlled locally, which allows manipulation of the SPHD and gives it a natural capacity for handling nonlinear problems. The SPHD filter can be further extended to support maneuvering multitarget tracking, where it can serve as an alternative to any available PHD filter implementation. The PHD filter does not work well for very low observable (VLO) target tracking problems, where the target SNR is very low; in such scenarios the PDF must be approximated by higher-order moments, so the PHD implementations may not be suitable. One of the best estimators for the VLO target tracking problem is the Maximum Likelihood Probabilistic Data Association (ML-PDA) algorithm. The standard ML-PDA algorithm is widely used in single-target initialization or geolocation problems with high false alarm rates. B-splines are also used in the ML-PDA (SML-PDA) implementation. The SML-PDA algorithm can determine the global maximum of the ML-PDA log-likelihood ratio with high efficiency in terms of state estimates and low computational complexity. For fast passive track initialization and search-and-rescue operations, the SML-PDA algorithm can be used more efficiently than the standard ML-PDA algorithm; with an extension, it also supports multitarget tracking. / Thesis / Doctor of Philosophy (PhD)
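The B-spline machinery this thesis builds on comes down to the Cox-de Boor recursion, which evaluates one basis function of a given order over a knot vector. A generic sketch (not the thesis code; the knot grid and indices are illustrative):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: the i-th B-spline basis function of order k
    (degree k - 1) on knot vector t, evaluated at the scalar x."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k - 1] > t[i]:                      # guard against repeated knots
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# A cubic (order-4) basis function on a uniform knot grid: supported on
# [t[2], t[6]] and twice continuously differentiable inside its support.
knots = np.arange(10.0)
for x in np.linspace(2.0, 6.0, 9):
    print(x, round(bspline_basis(2, 4, knots, x), 4))
```

Choosing order four (cubic) or higher, as in this example, gives the smooth, locally supported representation that the continuity argument in the abstract relies on.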
