141

Simuliertes klassisches Schätzen und Testen in Mehrperioden-Mehralternativen-Probitmodellen / Simulated classical estimation and testing in multi-period multi-alternative probit models

Ziegler, Andreas. Unknown Date (has links) (PDF)
Universität, Diss., 2001--Mannheim.
142

Interleaved frequency-division multiple-access: Systembeschreibung sowie Analyse und Optimierung des Übertragungsverhaltens im Mobilfunkkanal / Interleaved frequency-division multiple access: system description, analysis, and optimization of the transmission behavior in the mobile radio channel

Broeck, Isabella de. Unknown Date (has links)
Techn. Universität, Diss., 2004--Darmstadt.
143

Shape and topology constrained image segmentation with stochastic models

Zöller, Thomas. Unknown Date (has links) (PDF)
University, Diss., 2005--Bonn.
144

Investigations on linear transformations for speaker adaptation and normalization

Pitz, Michael. Unknown Date (has links) (PDF)
Techn. Hochsch., Diss., 2005--Aachen.
145

Utilização de curvas de crescimento longitudinal com distribuição normal θ-generalizada multivariada, no estudo da disfunção cardíaca em ratos com estenose aórtica supravalvar / Use of longitudinal growth curves with a multivariate θ-generalized normal distribution in the study of cardiac dysfunction in rats with supravalvular aortic stenosis

Amaral, Magali Teresopolis Reis January 2018 (has links)
Orientador: Carlos Roberto Padovani / Resumo: Em muitas situações, existe a necessidade de estudar o comportamento de alguma característica em uma mesma unidade amostral ao longo do tempo, dose acumulada de algum nutriente ou medicamento. Na prática, a estrutura dos dados dessa natureza geralmente estabelece comportamentos não lineares nos parâmetros de interesse, já que estes caracterizam melhor a realidade biológica pesquisada. Essa conjuntura é propícia ao estudo de remodelação cardíaca (RC) por sobrecarga pressórica em ratos submetidos a diferentes manobras sequenciais de cálcio. Como o comportamento da RC não está claramente estabelecido, o objetivo deste trabalho consiste em fazer um estudo comparativo sobre a performance de quatro modelos de curvas de crescimento em quatro grupos experimentais, considerando erros normais θ generalizado multivariado. Além disso, a modelagem dos dados envolve duas estruturas de covariância: a homocedástica com a presença de autocorrelação lag 1 e a heterocedástica multiplicativa. No contexto metodológico, utiliza-se o procedimento de estimação por máxima verossimilhança com a aplicação da técnica de reamostragem bootstrap. Além disso, técnicas de simulações são implementadas para comprovação das propriedades metodológicas aplicadas. Para comparação entre os modelos, utilizam-se alguns avaliadores de qualidade de ajuste. Conclui-se, no presente estudo, que a estrutura homocedástica com autocorrelação lag 1 para os modelos Brody e de Von Bertalanffy, destacam-se por apresentar ... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: In many situations there is a need to study the behavior of some characteristic in the same sample unit over time or over the accumulated dose of some nutrient or medication. In practice, data of this nature generally call for models that are nonlinear in the parameters of interest, since these better characterize the biological reality under study. This setting is well suited to the study of cardiac remodeling (CR) by pressure overload in rats submitted to different sequential calcium maneuvers. As the behavior of CR is not clearly established, the objective of this work is to carry out a comparative study of the performance of four growth-curve models in four experimental groups, considering multivariate θ-generalized normal errors. In addition, the data modeling involves two covariance structures: a homoscedastic structure with lag-1 autocorrelation and a multiplicative heteroscedastic structure. In the methodological context, maximum likelihood estimation is used together with bootstrap resampling. Simulation studies are also implemented to verify the properties of the applied methodology. Several goodness-of-fit measures are used to compare the models. The present study concludes that the homoscedastic structure with lag-1 autocorrelation for the Brody and von Bertalanffy models stands out by presenting excellent estimates and a good fit for the maximum developed stress (TD) as a function of t... (Complete abstract click electronic access below) / Doutor
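The four growth-curve models are not spelled out in the truncated abstract; the Brody and von Bertalanffy curves it highlights are, in their standard parameterizations, simple nonlinear mean functions. A minimal Python sketch of these two forms follows (the parameter values and the cubic von Bertalanffy form are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def brody(t, A, B, k):
    """Brody curve: asymptote A, integration constant B, maturity rate k."""
    return A * (1.0 - B * np.exp(-k * t))

def von_bertalanffy(t, A, B, k):
    """von Bertalanffy curve in its common cubic form."""
    return A * (1.0 - B * np.exp(-k * t)) ** 3

t = np.linspace(0.0, 10.0, 50)                 # hypothetical time (or dose) grid
y_brody = brody(t, A=120.0, B=0.8, k=0.5)
y_vb = von_bertalanffy(t, A=120.0, B=0.4, k=0.5)
```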
146

Effects of template mass, complexity, and analysis method on the ability to correctly determine the number of contributors to DNA mixtures

Alfonse, Lauren Elizabeth 08 April 2016 (has links)
In traditional forensic DNA casework, the inclusion or exclusion of individuals who may have contributed to an item of evidence may depend on the assumed number of individuals from which the evidence arose. Typically, the minimum number of contributors (NOC) to a mixture is determined by counting the number of alleles observed above a given analytical threshold (AT); this technique is known as maximum allele count (MAC). However, advances in polymerase chain reaction (PCR) chemistries and improvements in analytical sensitivity have led to an increase in the detection of complex, low template DNA (LtDNA) mixtures for which MAC is an inadequate means of determining the actual NOC. Despite the addition of highly polymorphic loci to multiplexed PCR kits and the advent of interpretation software that deconvolves DNA mixtures, a gap remains in the DNA analysis pipeline: an effective method of determining the NOC still needs to be established. NOCIt, a computational tool that provides a probability distribution on the NOC, may serve as a promising alternative to traditional, threshold-based methods. Utilizing user-provided calibration data consisting of single-source samples of known genotype, NOCIt calculates the a posteriori probability (APP) that an evidentiary sample arose from 0 to 5 contributors. The software models baseline noise, reverse and forward stutter proportions, stutter and allele dropout rates, and allele heights. This information is then used to determine whether the evidentiary profile originated from one or many contributors. In short, NOCIt provides information not only on the likely NOC but also on whether more than one value may be deemed probable. In the latter case, it may be necessary to modify downstream interpretation steps so that multiple values for the NOC are considered or the conclusion that most favors the defense is adopted. Phase I of this study focused on establishing the minimum number of single-source samples needed to calibrate NOCIt. Once this was determined, the performance of NOCIt was evaluated and compared to that of two other methods: the maximum likelihood estimator (MLE), accessed via the forensim R package, and MAC. Fifty (50) single-source samples proved sufficient to calibrate NOCIt, and the results indicate that NOCIt was the most accurate of the three methods. Phase II of this study explored the effects of template mass and sample complexity on the accuracy of NOCIt. The data showed that accuracy decreased as the NOC increased: for 1- and 5-contributor samples, the accuracy was 100% and 20%, respectively. The minimum template mass from any one contributor required to consistently estimate the true NOC was 0.07 ng -- the equivalent of approximately 10 cells' worth of DNA. Phase III further explored NOCIt and was designed to assess its robustness. Because the efficacy of determining the NOC may be affected by the PCR kit utilized, the results of NOCIt analyses of 1-, 2-, 3-, 4-, and 5-contributor mixtures amplified with AmpFlstr® Identifiler® Plus and PowerPlex® 16 HS were compared. A positive correlation was observed between kits for all NOCIt outputs. Additionally, NOCIt yielded higher accuracies for 1-, 3-, and 4-contributor samples amplified with Identifiler® Plus and for 5-contributor samples amplified with PowerPlex® 16 HS. The accuracy rates obtained for 2-contributor samples were equivalent between kits; therefore, the effect of amplification kit on the ability to determine the NOC was not substantial. Cumulatively, the data indicate that NOCIt is an improvement over traditional methods of determining the NOC and yields high accuracy rates for samples containing sufficient quantities of DNA. Further, the findings on the effect of template mass serve as a caution that forensic DNA samples containing low target quantities may need to be interpreted using multiple or different assumptions on the number of contributors, as this assumption is known to affect the conclusion in certain casework scenarios. Because a significant degree of inaccuracy was observed for all methods of determining the NOC at severely low template amounts, the data also challenge the notion that any DNA sample can be utilized for comparison purposes. This suggests that the ability to detect extremely complex LtDNA mixtures may not be commensurate with the ability to accurately interpret such mixtures, despite critical advances in software-based analysis. In addition to the availability of advanced comparison algorithms, limitations on the interpretability of complex LtDNA mixtures may also depend on the amount of biological material present on an evidentiary substrate.
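As a point of reference for the MAC baseline discussed above, the rule reduces to a one-line calculation over the per-locus allele counts. A minimal sketch with a hypothetical two-locus profile and analytical threshold (the locus names, peak heights, and threshold are illustrative only, and this is not the NOCIt software):

```python
import math

def min_contributors_mac(profile, analytical_threshold=50.0):
    """Maximum allele count (MAC): the minimum NOC is the ceiling of half the
    largest number of alleles observed above the AT at any single locus."""
    max_alleles = max(
        sum(1 for rfu in alleles.values() if rfu >= analytical_threshold)
        for alleles in profile.values()
    )
    return math.ceil(max_alleles / 2)

# Hypothetical profile: {locus: {allele: peak height in RFU}}
profile = {
    "D8S1179": {"12": 310.0, "13": 95.0, "14": 120.0, "15": 60.0},
    "FGA": {"20": 400.0, "22": 250.0, "24": 80.0},
}
print(min_contributors_mac(profile))  # 4 alleles above the AT at D8S1179 -> at least 2 contributors
```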
147

Gaussian copula modelling for integer-valued time series

Lennon, Hannah January 2016 (has links)
This thesis is concerned with the modelling of integer-valued time series. Such data occur naturally in various areas whenever a number of events is observed over time. The model considered in this study consists of a Gaussian copula with autoregressive-moving average (ARMA) dependence and discrete margins that can be specified or unspecified, with or without covariates. It can be interpreted as a 'digitised' ARMA model. An ARMA model is used for the latent process so that well-established methods in time series analysis can be applied. Still, the computation of the log-likelihood poses many problems, because it is a sum of 2^N terms involving the Gaussian cumulative distribution function, where N is the length of the time series. We consider a Monte Carlo Expectation-Maximisation (MCEM) algorithm for maximum likelihood estimation of the model, which works well for small to moderate N. An Approximate Bayesian Computation (ABC) method is then developed to take advantage of the fact that data can be simulated easily from an ARMA model and digitised; a spectral comparison method is used in the acceptance-rejection step, and this is shown to work well for large N. Finally, we write the model in an R-vine copula representation and use a sequential algorithm for the computation of the log-likelihood. We evaluate the score and Hessian of the log-likelihood and give analytic expressions for the standard errors. The proposed methodologies are illustrated using simulation studies and highlight the advantages of incorporating classic ideas from time series analysis into modern methods of model fitting. For illustration, we compare the three methods on US polio incidence data (Zeger, 1988) and discuss their relative merits.
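The 'digitised' ARMA construction described in the abstract lends itself to a short simulation sketch: generate the latent Gaussian ARMA series with unit marginal variance, push it through the standard normal CDF, and apply the quantile function of the chosen discrete margin. The sketch below assumes Poisson margins and an ARMA(1,1) latent process; these choices and the parameter values are illustrative rather than taken from the thesis:

```python
import numpy as np
from scipy import stats

def simulate_digitised_arma(n, phi=0.6, theta=0.3, mean_count=4.0, seed=None):
    """Gaussian copula count series: latent ARMA(1,1) scaled to unit variance,
    then digitised through the Poisson quantile function."""
    rng = np.random.default_rng(seed)
    # innovation s.d. chosen so the stationary variance of z_t equals 1
    gamma0_factor = (1.0 + 2.0 * phi * theta + theta**2) / (1.0 - phi**2)
    sigma_e = np.sqrt(1.0 / gamma0_factor)
    e = rng.normal(scale=sigma_e, size=n + 1)
    z = np.empty(n)
    z[0] = e[1] + theta * e[0]
    for t in range(1, n):
        z[t] = phi * z[t - 1] + e[t + 1] + theta * e[t]
    u = stats.norm.cdf(z)                          # uniform margins via the copula
    return stats.poisson.ppf(u, mu=mean_count).astype(int)

counts = simulate_digitised_arma(200, seed=1)
```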
148

On Maximum Likelihood Estimation of the Concentration Parameter of von Mises-Fisher Distributions

Hornik, Kurt, Grün, Bettina 10 1900 (has links) (PDF)
Maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions involves inverting the ratio R_ν = I_{ν+1}/I_ν of modified Bessel functions. Computational issues with approximate or iterative methods were discussed in Tanabe et al. (Comput Stat 22(1):145-157, 2007) and Sra (Comput Stat 27(1):177-190, 2012). In this paper we use Amos-type bounds for R_ν to deduce sharper bounds for the inverse function, determine the approximation error of these bounds, and use them to propose a new approximation for which the error tends to zero as the inverse of R_ν is evaluated at values tending to 1 (from the left). We show that previously introduced rational bounds for R_ν, which are invertible using quadratic equations, cannot be used to improve these bounds. / Series: Research Report Series / Department of Statistics and Mathematics
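For orientation, the inversion discussed above can be done generically by starting from the widely used Banerjee et al. (2005) approximation and refining with Newton's method on A_p(κ) = I_{p/2}(κ)/I_{p/2-1}(κ), which is R_ν with ν = p/2 - 1. The sketch below is that standard recipe, not the Amos-type-bound approximation the paper proposes:

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function of the first kind

def a_p(kappa, p):
    """A_p(kappa) = I_{p/2}(kappa) / I_{p/2 - 1}(kappa); the scaling in ive cancels."""
    return ive(p / 2.0, kappa) / ive(p / 2.0 - 1.0, kappa)

def vmf_kappa_mle(r, p, newton_steps=5):
    """Approximate MLE of the vMF concentration from mean resultant length r in R^p."""
    kappa = r * (p - r**2) / (1.0 - r**2)      # Banerjee et al. (2005) starting value
    for _ in range(newton_steps):
        a = a_p(kappa, p)
        # A_p'(kappa) = 1 - A_p(kappa)**2 - (p - 1)/kappa * A_p(kappa)
        kappa -= (a - r) / (1.0 - a**2 - (p - 1.0) / kappa * a)
    return kappa

print(vmf_kappa_mle(r=0.8, p=3))  # concentration estimate for 3-dimensional data
```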
149

Distribuições das classes Kumaraswamy generalizada e exponenciada: propriedades e aplicações / Distributions of the generalized Kumaraswamy and exponentiated classes: properties and applications

Antonio Carlos Ricardo Braga Junior 04 April 2013 (has links)
Recentemente, Cordeiro e de Castro (2011) apresentaram uma classe generalizada baseada na distribuição Kumaraswamy (Kw-G). Essa classe de distribuições modela as formas de risco crescente, decrescente, unimodal e forma de U ou de banheira. Uma importante distribuição pertencente a essa classe é a distribuição Kumaraswamy Weibull modificada (KwMW) proposta por Cordeiro; Ortega e Silva (2013). Com isso foi utilizada essa distribuição para o desenvolvimento de algumas novas propriedades e análise bayesiana. Além disso, foi desenvolvida uma nova distribuição de probabilidade a partir da distribuição gama generalizada geométrica (GGG) que foi denominada de gama generalizada geométrica exponenciada (GGGE). Para a nova distribuição GGGE foram calculados os momentos, a função geradora de momentos, os desvios médios, a confiabilidade e as estatísticas de ordem. Desenvolveu-se o modelo de regressão log-gama generalizada geométrica exponenciada. Para a estimação dos parâmetros, foram utilizados os métodos de máxima verossimilhança e bayesiano e, finalmente, para ilustrar a aplicação da nova distribuição foi analisado um conjunto de dados reais. / Recently, Cordeiro and de Castro (2011) introduced a generalized class based on the Kumaraswamy distribution (Kw-G). This class of distributions models increasing, decreasing, unimodal and U-shaped (bathtub) hazard forms. An important distribution belonging to this class is the Kumaraswamy modified Weibull (KwMW) distribution proposed by Cordeiro, Ortega and Silva (2013). This distribution was used to develop some new properties and a Bayesian analysis. Furthermore, we develop a new probability distribution from the generalized gamma geometric (GGG) distribution, called the generalized gamma geometric exponentiated (GGGE) distribution. For the new distribution we calculate the moments, the moment generating function, the mean deviations, the reliability and the order statistics. We also define a log-generalized gamma geometric exponentiated regression model. The model parameters are estimated by maximum likelihood and Bayesian methods. Finally, we illustrate the potential of the new distribution by means of an application to a real data set.
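For reference, the Kw-G construction of Cordeiro and de Castro (2011) composes a baseline CDF G with the Kumaraswamy CDF, giving F(x) = 1 - [1 - G(x)^a]^b and density f(x) = a b g(x) G(x)^(a-1) [1 - G(x)^a]^(b-1). A minimal sketch using an ordinary Weibull baseline (the baseline and parameter values here are illustrative; the thesis works with the modified Weibull baseline of the KwMW model):

```python
import numpy as np
from scipy import stats

def kw_g_cdf(x, a, b, G):
    """Kumaraswamy-G CDF: F(x) = 1 - (1 - G(x)**a)**b for a baseline CDF G."""
    return 1.0 - (1.0 - G(x) ** a) ** b

def kw_g_pdf(x, a, b, G, g):
    """Kumaraswamy-G density for baseline CDF G and baseline density g."""
    Gx = G(x)
    return a * b * g(x) * Gx ** (a - 1.0) * (1.0 - Gx**a) ** (b - 1.0)

# Illustrative Weibull(shape=1.5) baseline
weibull_cdf = lambda t: stats.weibull_min.cdf(t, c=1.5)
weibull_pdf = lambda t: stats.weibull_min.pdf(t, c=1.5)
x = np.linspace(0.01, 5.0, 200)
F = kw_g_cdf(x, a=2.0, b=0.5, G=weibull_cdf)
f = kw_g_pdf(x, a=2.0, b=0.5, G=weibull_cdf, g=weibull_pdf)
```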
150

Statistical Signal Processing of ESI-TOF-MS for Biomarker Discovery

January 2012 (has links)
Abstract: Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has become one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is to deploy stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful approach to decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments in the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of this baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and of the ML estimation is evaluated with Monte Carlo simulations, which show promising results. An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-class classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular disease, and it is shown to perform better than a linear discriminant analysis based classifier. / Dissertation/Thesis / Ph.D. Electrical Engineering 2012
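To make the detection framework concrete: for a known peak shape in additive white Gaussian noise, the NP/LRT detector reduces to comparing a matched-filter statistic against a threshold set by the desired false-alarm probability, and the ML amplitude estimate is the corresponding least-squares projection. The sketch below is a generic illustration under those textbook assumptions, not the dissertation's ESI-TOF-MS signal and noise models:

```python
import numpy as np
from scipy import stats

def lrt_peak_detect(y, template, sigma, p_fa=1e-3):
    """NP detector for y = a*template + white Gaussian noise with known sigma:
    decide 'peak present' if T = template^T y exceeds the threshold giving
    false-alarm probability p_fa under H0 (a = 0)."""
    s = np.asarray(template, dtype=float)
    t_stat = s @ y
    threshold = stats.norm.isf(p_fa, scale=sigma * np.linalg.norm(s))  # T ~ N(0, sigma^2 ||s||^2) under H0
    amplitude_ml = t_stat / (s @ s)            # ML / least-squares amplitude estimate
    return t_stat > threshold, amplitude_ml

# Hypothetical Gaussian peak template on a 50-sample window
n = np.arange(50)
template = np.exp(-0.5 * ((n - 25) / 3.0) ** 2)
rng = np.random.default_rng(0)
y = 0.8 * template + rng.normal(scale=0.5, size=n.size)
print(lrt_peak_detect(y, template, sigma=0.5))
```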
