About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Models for target detection times.

Bae, Deok Hwan, January 1989
Approved for public release; distribution is unlimited.
Some battlefield models include a component that models the time it takes an observer to detect a target. Different observers may have different mean detection times due to factors such as the type of sensor used, environmental conditions, and observer fatigue. Two parametric models for the distribution of time to target detection are considered that can incorporate these factors. Maximum likelihood estimation procedures for the parameters are described. Results of simulation experiments studying the small-sample behavior of the estimators are presented.
http://archive.org/details/modelsfortargetd00baed
Major, Korean Air Force
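As an illustration of the kind of estimation this abstract describes (it does not name the two parametric families, so the exponential model, the covariates, and the simulated data below are hypothetical), a minimal maximum likelihood fit of detection times whose rate depends on observer factors might look like this:

```python
# Hypothetical sketch: ML fit of an exponential detection-time model whose
# rate depends on observer covariates (e.g. sensor type). The thesis's
# actual parametric families are not specified in this abstract.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # intercept + sensor type
beta_true = np.array([-1.0, 0.5])
t = rng.exponential(1.0 / np.exp(X @ beta_true))          # simulated detection times

def neg_log_lik(beta):
    lam = np.exp(X @ beta)            # log link keeps detection rates positive
    return -np.sum(np.log(lam) - lam * t)

fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
print("beta_hat:", fit.x)             # close to beta_true for large n
```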
2

LIKELIHOOD INFERENCE FOR LOG-LOGISTIC DISTRIBUTION UNDER PROGRESSIVE TYPE-II RIGHT CENSORING

Alzahrani, Alya, 10 1900
Censoring arises quite often in lifetime data, and its presence may be planned or unplanned. In this project, we demonstrate progressive Type-II right censoring when the underlying distribution is log-logistic. The objective is to discuss inferential methods for the unknown parameters of the distribution based on maximum likelihood estimation. The Newton-Raphson method is proposed as a numerical technique for solving the pertinent non-linear equations. In addition, confidence intervals for the unknown parameters are constructed based on (i) the asymptotic normality of the maximum likelihood estimates, and (ii) the percentile bootstrap resampling technique. A Monte Carlo simulation study is conducted to evaluate the performance of the methods of inference developed here. Some illustrative examples are also presented.
Master of Science (MSc)
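A minimal sketch of the likelihood this abstract describes, assuming a log-logistic model with scale a and shape b: each observed failure time contributes a density term, and the R[i] units progressively withdrawn at that failure each contribute a survival term. The data are invented, and a general-purpose quasi-Newton optimizer stands in for the hand-coded Newton-Raphson iteration the project develops:

```python
# Hypothetical sketch: MLE for a log-logistic model (scale a, shape b) under
# progressive Type-II right censoring. At the i-th observed failure time t[i],
# R[i] surviving units are withdrawn; each contributes a survival term.
import numpy as np
from scipy.optimize import minimize

t = np.array([0.4, 0.9, 1.3, 2.1, 3.0])   # made-up ordered failure times
R = np.array([2, 0, 1, 0, 3])             # made-up withdrawal counts

def neg_log_lik(theta):
    log_a, log_b = theta                   # optimize on the log scale: a, b > 0
    a, b = np.exp(log_a), np.exp(log_b)
    z = (t / a) ** b
    log_f = np.log(b / a) + (b - 1) * np.log(t / a) - 2 * np.log1p(z)
    log_S = -np.log1p(z)
    return -np.sum(log_f + R * log_S)

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="BFGS")
a_hat, b_hat = np.exp(fit.x)
print("scale:", a_hat, "shape:", b_hat)
```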
3

Estimation of Technical Efficiency in Stochastic Frontier Analysis

Nguyen, Ngoc B., 03 August 2010
No description available.
4

Maximum likelihood and Bayesian estimates of the number of errors in a software system.

Silva, Karolina Barone Ribeiro da, 24 February 2006
In this work we present the capture-recapture methodology, under both classical and Bayesian approaches, to estimate the number of errors in a software system through inspection by distinct reviewers. We present the general statistical model assuming independence among errors and among reviewers, and consider the particular cases of equally detectable (homogeneous) errors with unequally efficient (heterogeneous) reviewers, and of unequally detectable (heterogeneous) errors with equally efficient (homogeneous) reviewers. Then, under the assumption of independence and heterogeneity among errors and independence and homogeneity among reviewers, we suppose that the heterogeneity of the errors is expressed by classifying them as either easy or difficult to detect, with the detection probabilities of an easy error and of a difficult error assumed known. Finally, under the hypothesis of independence and homogeneity among errors, we present a new model allowing heterogeneity and dependence among reviewers. Examples with simulated and real data are also presented.
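For the simplest case mentioned above (homogeneous errors, two independent reviewers), the classical Lincoln-Petersen capture-recapture estimator conveys the idea; this toy sketch uses invented counts and is not the dissertation's model:

```python
# Hypothetical sketch: Lincoln-Petersen capture-recapture estimate of the
# total number of software errors N from two independent inspections,
# assuming homogeneous (equally detectable) errors.
n1 = 24   # errors found by reviewer 1 (made-up count)
n2 = 30   # errors found by reviewer 2 (made-up count)
m  = 18   # errors found by both reviewers (made-up count)

# Under independence, m/n2 estimates reviewer 1's detection rate,
# so N_hat = n1 / (m/n2) = n1 * n2 / m.
N_hat = n1 * n2 / m
# Chapman's correction reduces small-sample bias.
N_chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
print(f"Lincoln-Petersen: {N_hat:.1f}, Chapman: {N_chapman:.1f}")
```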
5

Elimination of nuisance parameters in the estimation of population sizes

Festucci, Ana Claudia, 15 January 2010
Funding: Financiadora de Estudos e Projetos
In this study, we used the capture-recapture procedure to estimate the size of a closed population. We analysed three different statistical models. For each of these models we determined, through several methods for eliminating nuisance parameters, the likelihood function together with the profile, conditional, uniform integrated, Jeffreys integrated and generalized integrated likelihood functions of the population size, except for the last model, where we determined a function analogous to the conditional likelihood function, called the integrated restricted likelihood function. In each case we determined the respective maximum likelihood estimates, the empirical confidence intervals and the empirical mean squared errors of the estimates of the population size, and we studied, using simulated data, the performance of the models.
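As a sketch of one elimination method named here, the profile likelihood of the population size N can be computed under the simple closed-population model M0 (each of N individuals is captured independently with probability p on each of T occasions) by maximizing over p in closed form. The model choice and the counts below are illustrative assumptions, not the dissertation's:

```python
# Hypothetical sketch: profile likelihood for a closed-population size N
# under model M0. Given N, the ML estimate of the capture probability p
# is s / (N * T), so p can be profiled out in closed form.
import numpy as np
from scipy.special import gammaln

T = 5      # capture occasions (illustrative)
n = 40     # distinct individuals seen at least once (illustrative)
s = 70     # total number of captures across all occasions (illustrative)

def profile_log_lik(N):
    p_hat = s / (N * T)                      # ML estimate of p given N
    return (gammaln(N + 1) - gammaln(N - n + 1)   # log N!/(N-n)!
            + s * np.log(p_hat) + (N * T - s) * np.log1p(-p_hat))

grid = np.arange(n, 500)                     # candidate population sizes
N_hat = grid[np.argmax(profile_log_lik(grid))]
print("profile-likelihood estimate of N:", N_hat)
```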
6

Recognition of Complex Events in Open-Source Web-Scale Videos: Features, Intermediate Representations and Their Temporal Interactions

Bhattacharya, Subhabrata, 01 January 2013
Recognition of complex events in consumer-uploaded Internet videos, captured under real-world settings, has emerged as a challenging area of research across both the computer vision and multimedia communities. In this dissertation, we present a systematic decomposition of complex events into hierarchical components, make an in-depth analysis of how existing research caters to the various levels of this hierarchy, and identify three key stages where we make novel contributions, keeping complex events in focus. These are as follows: (a) extraction of novel semi-global features: first, we introduce a Lie-algebra based representation of the dominant camera motion present while capturing videos and show how this can be used as a complementary feature for video analysis; second, we propose compact clip-level descriptors of a video based on the covariance of appearance and motion features, which we further use in a sparse coding framework to recognize realistic actions and gestures. (b) Construction of intermediate representations: we propose an efficient probabilistic representation built from low-level video features, based on maximum likelihood estimates, which demonstrates state-of-the-art performance in large-scale visual concept detection. (c) Modeling temporal interactions between intermediate concepts: using block Hankel matrices and harmonic analysis of slowly evolving linear dynamical systems, we propose two new discriminative feature spaces for complex event recognition and demonstrate significantly improved recognition rates over previously proposed approaches.
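The clip-level covariance descriptor in (a) can be sketched in a few lines: stack per-frame appearance/motion features, form their covariance, and map the resulting SPD matrix into a Euclidean space with the matrix logarithm so it can feed a sparse coder or linear classifier. The feature dimensions and random features below are placeholders, not the dissertation's pipeline:

```python
# Hypothetical sketch: clip-level covariance descriptor of per-frame features,
# mapped to a vector via the log-Euclidean embedding (matrix logarithm of the
# SPD covariance), suitable for sparse coding or a linear classifier.
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)
frames, d = 120, 12                      # invented: 120 frames, 12-dim features
F = rng.standard_normal((frames, d))     # stand-in appearance/motion features

C = np.cov(F, rowvar=False) + 1e-6 * np.eye(d)   # regularize to keep C SPD
L = logm(C)                                      # symmetric matrix logarithm
iu = np.triu_indices(d)
descriptor = L[iu]                               # upper triangle as a vector
print("descriptor length:", descriptor.size)     # d*(d+1)/2 = 78
```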
