111 |
傳統工業升級計畫評估的統計分析 / Statistical analysis on the evaluation of a conventional industries upgrading program
張仲翔, Chang, Chung Hsiung Unknown Date (has links)
工業的發達與否代表一個國家國力的強弱,故欲使我國達已開發國家之林,提昇整個工業或產業的升級,已經是刻不容緩的事。近年來,政府致力於發展新的高科技產業,同時,對於傳統工業也以獎勵或鼓勵技術升級的方式,以提昇整體產業競爭力。其中包含了所謂"傳統工業技術升級計畫"。
所以，本文欲藉助對數線型模式，針對"傳統工業技術升級計畫"，來建構及解釋一些模式，並提出建議，以期傳統工業升級計畫，能更符合每個產業的要求。 / Modernization of industry reflects the strength of a country, so upgrading our industries is urgent if our country is to join the ranks of developed nations. The government has recently been making every effort to develop new high-tech industries; at the same time, it also provides various incentives to upgrade traditional industries and thereby raise overall industrial competitiveness. One of these incentives is the so-called "Conventional Industries Upgrading Program".
In this paper, we use loglinear models to analyze the data provided by the companies that participated in the "Conventional Industries Upgrading Program". Based on the models, we offer some suggestions and conclusions.
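The abstract does not show the underlying data, but the kind of log-linear analysis it describes can be sketched on a hypothetical contingency table of participating firms; the sector and outcome categories and all counts below are invented for illustration, not taken from the thesis.

```python
# Minimal sketch of a log-linear analysis of a two-way contingency table,
# assuming hypothetical counts of participating firms cross-classified by
# industry sector and self-reported upgrade outcome (not the thesis data).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

counts = pd.DataFrame({
    "sector":  ["textile", "textile", "machinery", "machinery", "food", "food"],
    "outcome": ["improved", "not_improved"] * 3,
    "n":       [34, 16, 41, 9, 22, 28],   # hypothetical cell counts
})

# Independence model: log E[n] = sector + outcome
indep = smf.glm("n ~ sector + outcome", data=counts,
                family=sm.families.Poisson()).fit()
# Saturated model adds the sector:outcome interaction
sat = smf.glm("n ~ sector * outcome", data=counts,
              family=sm.families.Poisson()).fit()

# Likelihood ratio (deviance) test of independence
lr = indep.deviance - sat.deviance
df = indep.df_resid - sat.df_resid
print(f"LR = {lr:.2f}, df = {df}, p = {stats.chi2.sf(lr, df):.4f}")
```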
|
112 |
Assessment of balance control in relation to fall risk among older people
Nordin, Ellinor January 2008 (has links)
Falls and their consequences among older people are a serious medical and public health problem. Identifying individuals at risk of falling is therefore a major concern. The purpose of this thesis was to evaluate measurement tools of balance control and their predictive value when screening for fall risk in physically dependent individuals ≥65 years old living in residential care facilities, and physically independent individuals ≥75 years old living in the community. Following baseline assessments falls were monitored during six months in physically dependent individuals based on staff reports, and during one year in physically independent individuals based on self reports. In physically dependent individuals test-retest reliability of the Timed Up&Go test (TUG) was established in relation to cognitive impairment. Absolute reliability measures exposed substantial day-to-day variability in mobility performance at an individual level despite excellent relative reliability (ICC 1.1 >0.90) regardless of cognitive function (MMSE ≥10). Fifty-three percent of the participants fell at least once during follow-up. Staff judgement of their residents’ fall risk had the best prognostic value for ruling in a fall risk in individuals judged with ‘high risk’ (positive Likelihood ratio, LR+ 2.8). Timed, and subjective rating of fall risk (modified Get Up&Go test, GUG-m) were useful for ruling out a high fall risk in individuals with TUG scores <15 seconds (negative LR, LR- 0.1) and GUG-m scores of ‘no fall risk’ (LR- 0.4), however few participants achieved such scores. In physically independent individuals balance control was challenged by dual-task performances. Subsequent dual-task costs in gait (DTC), i.e. the difference between single walking and walking with a simultaneous second task, were registered using an electronic mat. Forty-eight percent of the participants fell at least once during follow-up. A small prognostic guidance for ruling in a high fall risk was found for DTC in mean step width of ≤3.7 mm with a manual task (LR+ 2.3), and a small guidance for ruling out a high fall risk with DTC in mean step width of ≤3.6 mm with a cognitive task (LR- 0.5). In cross-sectional evaluations DTC related to an increased fall risk were associated with: sub-maximal physical performance stance scores (Odds Ratio, OR, 3.2 to 3.8), lower self-reported balance confidence (OR 2.6), higher activity avoidance (OR 2.1), mobility disability (OR 4.0), and cautious walking out-door (OR 3.0). However, these other measures of physical function failed to provide any guidance to fall risk in this population of seemingly able older persons. In conclusion – Fall risk assessments may guide clinicians in two directions, either in ruling in or in ruling out a high fall risk. A single cut-off score, however, does not necessarily give guidance in both directions. Staff experienced knowledge is superior to a single assessment of mobility performance for ruling in a high fall risk. Clinicians need to consider the day-to-day variability in mobility when interpreting the TUG score of a physically dependent individual. DTC of gait can, depending on the type of secondary task, indicate a functional limitation related to an increased fall risk or a flexible capacity related to a decreased fall risk. 
DTC in mean step width seems to be a valid measure of balance control in physically independent older people and may be a valuable part of the physical examination of balance and gait when screening for fall risk as other measures of balance control may fail to provide any guidance of fall risk in this population.
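For readers unfamiliar with the LR+ and LR- figures quoted above, the sketch below shows how positive and negative likelihood ratios of a screening cut-off (such as TUG < 15 s) are computed from a 2x2 table; the counts are hypothetical, not the thesis data.

```python
# Hedged sketch: LR+ and LR- of a dichotomized screening test from a 2x2 table.
# The counts below are invented for illustration only.
def likelihood_ratios(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_plus = sensitivity / (1 - specificity)   # large values help rule in a high fall risk
    lr_minus = (1 - sensitivity) / specificity  # values near 0 help rule out a high fall risk
    return lr_plus, lr_minus

lr_plus, lr_minus = likelihood_ratios(tp=40, fp=25, fn=8, tn=30)
print(f"LR+ = {lr_plus:.1f}, LR- = {lr_minus:.1f}")
```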
|
113 |
Path Extraction of Low SNR Dim Targets from Grayscale 2-D Image Sequences
Erguven, Sait 01 September 2006 (has links) (PDF)
In this thesis, an algorithm for visual detection and tracking of very low SNR targets, i.e. dim targets, is developed. Processing a single frame at a time cannot be used for this purpose because the intensity spectra of the background and the target are too close. Therefore, change detection of super pixels, groups of pixels that provide sufficient statistics for likelihood ratio testing, is proposed. Super pixels identified as transition points are marked on a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its motion vector in the next frame by the Label Destruction and Centroids Mapping techniques. Candidate centroids are passed through the Distribution Density Function Maximization and Maximum Histogram Size Filtering methods to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometric centroids obtained in each frame are used as the observed target path, which is fed into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on quantization of the possible states, i.e. the observed target path centroids, and the Viterbi Algorithm.
According to the system and observation models, metric values of all possible target paths are computed from the observation and transition probabilities. The path with the maximum metric value at the last frame is selected as the estimated target path.
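The Viterbi decoding step over quantized states can be illustrated with a generic dynamic-programming sketch; the transition and observation score matrices are assumed inputs here, not the thesis's actual system and observation models.

```python
# Generic Viterbi decoding sketch over quantized candidate states, in the spirit of
# the smoothing step described above; the scores are placeholders supplied by the caller.
import numpy as np

def viterbi(log_trans: np.ndarray, log_obs: np.ndarray) -> np.ndarray:
    """log_trans: (S, S) log transition scores; log_obs: (T, S) log observation scores."""
    T, S = log_obs.shape
    metric = np.full((T, S), -np.inf)
    backptr = np.zeros((T, S), dtype=int)
    metric[0] = log_obs[0]
    for t in range(1, T):
        cand = metric[t - 1][:, None] + log_trans          # (prev state, next state)
        backptr[t] = np.argmax(cand, axis=0)               # best predecessor per state
        metric[t] = cand[backptr[t], np.arange(S)] + log_obs[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(metric[-1]))                  # maximum metric at the last frame
    for t in range(T - 2, -1, -1):                         # backtrace the estimated path
        path[t] = backptr[t + 1][path[t + 1]]
    return path
```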
|
114 |
Sequential probability ratio tests based on grouped observations
Eger, Karl-Heinz, Tsoy, Evgeni Borisovich 26 June 2010 (has links) (PDF)
This paper deals with sequential likelihood ratio tests based on grouped observations. It is demonstrated that the method of conjugated parameter pairs known from the non-grouped case can be extended to the grouped case, yielding Wald-like approximations for the OC and ASN functions. For near hypotheses, so-called F-optimal groupings are recommended. As an example, an SPRT based on grouped observations for the parameter of an exponentially distributed random variable is considered.
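As a point of reference for the example mentioned above, here is a minimal sketch of a plain (non-grouped) SPRT for the rate of an exponential distribution with Wald's boundaries; the grouped-observation variant and the F-optimal groupings of the paper are not reproduced, and all numbers are assumptions.

```python
# Baseline sketch: SPRT for the rate of an exponential distribution with Wald boundaries.
# This is not the paper's grouped-observation procedure.
import math
import random

def sprt_exponential(samples, lam0: float, lam1: float,
                     alpha: float = 0.05, beta: float = 0.05) -> str:
    a = math.log(beta / (1 - alpha))       # lower boundary: accept H0 (rate lam0)
    b = math.log((1 - beta) / alpha)       # upper boundary: accept H1 (rate lam1)
    llr = 0.0
    for x in samples:
        # log-likelihood ratio increment of Exp(lam1) against Exp(lam0)
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * x
        if llr <= a:
            return "accept H0"
        if llr >= b:
            return "accept H1"
    return "continue sampling"

random.seed(1)
data = (random.expovariate(1.5) for _ in range(10_000))   # simulated under H1
print(sprt_exponential(data, lam0=1.0, lam1=1.5))
```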
|
115 |
Model-Based Optimization of Clinical Trial Designs
Vong, Camille January 2014 (has links)
High attrition rates in the drug development pipeline have been recognized as a sign that it is necessary to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), these models constitute an attractive toolkit for ushering new agents to marketing approval more rapidly and successfully. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped power method made it possible to rapidly generate multiple hypotheses and to compute the corresponding sample sizes within 1% of the time usually necessary for more traditional model-based power assessment. Allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that put information and safety aspects first. The use of these methodologies showed better estimation properties and robustness in the ensuing data analysis and reduced the number of patients exposed to severe toxicity seven-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy whose main dose-limiting toxicity is hematological. In this example, Bayesian and model-based approaches provided the incentive for a paradigm shift away from the traditional rule-based "3+3" design algorithm. Throughout this thesis several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can raise the probability of a successful trial and strengthen the ethical conduct that is paramount in clinical research.
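The Monte-Carlo Mapped Power method itself works on model-based likelihood ratios and is not reproduced here; as a hedged baseline, the sketch below shows ordinary Monte Carlo power assessment and a sample-size search for a two-arm comparison, with all effect-size and variability numbers invented.

```python
# Hedged illustration of Monte Carlo power assessment for a two-arm trial using a
# t-test; the thesis's Monte-Carlo Mapped Power (MCMP) method is not reproduced.
import numpy as np
from scipy import stats

def mc_power(n_per_arm: int, effect: float, sd: float,
             n_sim: int = 2_000, alpha: float = 0.05, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        placebo = rng.normal(0.0, sd, n_per_arm)
        active = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(active, placebo)
        rejections += p < alpha
    return rejections / n_sim

# Smallest per-arm sample size reaching roughly 80% power under these assumptions
for n in range(10, 200, 10):
    if mc_power(n, effect=0.5, sd=1.0) >= 0.80:
        print(f"about {n} subjects per arm")
        break
```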
|
116 |
Testing the compatibility of constraints for parameters of a geodetic adjustment model
Lehmann, Rüdiger, Neitzel, Frank 06 August 2014 (has links) (PDF)
Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint in such a way that if one of them exceeds in magnitude a certain threshold then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case, that each constraint is to be tested individually. Every test is worked out both for a known as well as for an unknown prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is
illustrated by the example of a double levelled line. / Geodätische Ausgleichungsmodelle werden oft auf eine Weise formuliert, bei der die Modellparameter bestimmte Bedingungsgleichungen zu erfüllen haben. Die normierten Lagrange-Multiplikatoren wurden bisher als Maß für den ausgeübten Zwang verwendet, und zwar so, dass wenn einer von ihnen betragsmäßig eine bestimmte Schwelle übersteigt, dann ist davon auszugehen, dass die zugehörige Bedingungsgleichung nicht mit den Beobachtungen und den restlichen Bedingungsgleichungen kompatibel ist. Wir zeigen, dass diese und ähnliche Maße als Teststatistiken eines Likelihood-Quotiententests der statistischen Hypothese, dass einige Bedingungsgleichungen in diesem Sinne inkompatibel sind, abgeleitet werden können. Das wurde bisher nur für spezielle Bedingungsgleichungen getan (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). Wir starten vom einfachsten Fall, dass die gesamte Menge der Bedingungsgleichungen getestet werden muss, und gelangen zu dem fortgeschrittenen Problem, dass jede Bedingungsgleichung individuell zu testen ist. Jeder Test wird sowohl für bekannte, wie auch für unbekannte a priori Varianzfaktoren ausgearbeitet. Die zugehörigen Verteilungen werden sowohl unter der Null- wie auch unter der Alternativhypthese abgeleitet. Die Theorie wird am Beispiel einer Doppelnivellementlinie illustriert.
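For a known variance factor, a compatibility test of this kind can be sketched as the gain in the weighted sum of squared residuals caused by enforcing the constraints; the sketch below assumes a plain Gauss-Markov model with unit weights and hypothetical matrices, and it is not the paper's derivation.

```python
# Sketch (not the paper's derivation): with a known variance factor, the increase in
# the sum of squared residuals caused by enforcing linear constraints Bx = b is
# chi-square distributed under the null that the constraints are compatible.
import numpy as np
from scipy import stats

def constraint_compatibility_test(A, y, B, b, sigma2: float = 1.0):
    x_free, *_ = np.linalg.lstsq(A, y, rcond=None)            # unconstrained adjustment
    n = A.shape[1]
    N = A.T @ A
    # Constrained least squares via Lagrange multipliers (KKT system)
    K = np.block([[N, B.T], [B, np.zeros((B.shape[0], B.shape[0]))]])
    rhs = np.concatenate([A.T @ y, b])
    x_con = np.linalg.solve(K, rhs)[:n]
    omega_free = np.sum((y - A @ x_free) ** 2)
    omega_con = np.sum((y - A @ x_con) ** 2)
    T = (omega_con - omega_free) / sigma2                     # LR-type test statistic
    return T, stats.chi2.sf(T, df=B.shape[0])

# Hypothetical toy adjustment with two compatible constraints
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 4)); x_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ x_true + rng.normal(scale=0.1, size=20)
B = np.array([[1.0, -1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
print(constraint_compatibility_test(A, y, B, B @ x_true, sigma2=0.1**2))
```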
|
117 |
Distribuição normal assimétrica para dados de expressão gênica
Gomes, Priscila da Silva 17 April 2009 (has links)
Financiadora de Estudos e Projetos / Microarray technologies are used to measure the expression levels of a large number of genes or gene fragments simultaneously under different conditions. This technology is useful for determining genes that are responsible for genetic diseases. A common statistical methodology used to determine whether a gene g shows evidence of different expression levels is the t-test, which requires the assumption of normality for the data (Saraiva, 2006; Baldi & Long, 2001). However, this assumption sometimes does not agree with the nature of the analyzed data. In this work we use the skew-normal distribution, described formally by Azzalini (1985), which has the normal distribution as a particular case, in order to relax the assumption of normality. Considering a frequentist approach, we conducted a simulation study to detect differences between gene expression levels under control and treatment conditions through the t-test. Another simulation was made to examine the power of the t-test when an asymmetrical model is assumed for the data. We also used the likelihood ratio test to verify the adequacy of an asymmetrical model
for the data. / Os microarrays são ferramentas utilizadas para medir os níveis de expressão de uma grande quantidade de genes ou fragmentos de genes simultaneamente em situações variadas. Com esta ferramenta é possível determinar possíveis genes causadores de doenças de origem genética. Uma abordagem estatística comumente utilizada para determinar se um gene g apresenta evidências para níveis de expressão diferentes consiste no teste t, que exige a suposição de normalidade aos dados (Saraiva, 2006; Baldi & Long, 2001). No entanto, esta suposição pode não condizer com a natureza dos dados analisados. Neste trabalho, será utilizada a distribuição normal assimétrica descrita formalmente por Azzalini (1985), que tem a distribuição normal como caso particular, com o intuito de
flexibilizar a suposição de normalidade. Considerando a abordagem clássica, é realizado um estudo de simulação para detectar diferenças entre os níveis de expressão gênica em
situações de controle e tratamento através do teste t, também é considerado um estudo de simulação para analisar o poder do teste t quando é assumido um modelo assimétrico
para o conjunto de dados. Também é realizado o teste da razão de verossimilhança, para verificar se o ajuste de um modelo assimétrico aos dados é adequado.
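A hedged sketch of the adequacy check described above: fit a normal and an Azzalini skew-normal to expression values by maximum likelihood and compare them with a likelihood ratio test; the data below are simulated, not the thesis's microarray data.

```python
# Sketch: likelihood ratio test of normal (null) against skew-normal (alternative)
# on simulated expression values; not the thesis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
expr = stats.skewnorm.rvs(a=4, loc=0.0, scale=1.0, size=200, random_state=rng)

mu, sd = stats.norm.fit(expr)                         # null model
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(expr)  # alternative with skewness parameter

ll_norm = np.sum(stats.norm.logpdf(expr, mu, sd))
ll_skew = np.sum(stats.skewnorm.logpdf(expr, a_hat, loc_hat, scale_hat))

lr = 2 * (ll_skew - ll_norm)                          # one extra parameter (skewness)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.4f}")
```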
|
118 |
Testes em modelos Weibull na forma estendida de Marshall-Olkin
Magalhães, Felipe Henrique Alves 28 December 2011 (has links)
Universidade Federal do Rio Grande do Norte / In survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time. The main characteristic of survival data is the presence of censoring, which is a partial observation of the response. In this setting, some models occupy an important position because they properly fit several practical situations, among them the Weibull model. Marshall-Olkin extended-form distributions offer a generalization of basic distributions that enables greater flexibility in fitting lifetime data. This paper presents a simulation study that compares the gradient test and the likelihood ratio test using the Marshall-Olkin extended-form Weibull distribution. As a result, only a small advantage is found for the likelihood ratio test. / Em análise de sobrevivência, a variável resposta é, geralmente, o tempo até a ocorrência de um evento de interesse, denominado tempo de falha, e a principal característica de dados de sobrevivência é a presença de censura, que é a observação parcial da resposta. Associados a essas informações, alguns modelos ocupam uma posição de destaque por sua comprovada adequação a várias situações práticas, entre os quais é possível citar o modelo Weibull. Distribuições na forma estendida de Marshall-Olkin oferecem uma generalização de distribuições básicas que permitem uma flexibilidade maior no ajuste de dados de tempo de vida. Este trabalho apresenta um estudo de simulação que compara duas estatísticas de teste, a da Razão de Verossimilhanças e a Gradiente, utilizando a distribuição Weibull em sua forma estendida de Marshall-Olkin. Como resultado, verifica-se apenas uma pequena vantagem para a estatística da Razão de Verossimilhanças.
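A hedged companion sketch: a likelihood ratio test of the Marshall-Olkin parameter (alpha = 1 recovers the plain Weibull) on simulated complete lifetimes; the dissertation's gradient-test comparison and any censoring schemes are not reproduced, and the simulation settings are assumptions.

```python
# Sketch: LR test of H0: alpha = 1 (plain Weibull) against the Marshall-Olkin
# extended Weibull on simulated complete data; illustration only.
import numpy as np
from scipy import stats, optimize

def neg_loglik(params, t):
    log_alpha, log_k, log_lam = params           # log-parametrization keeps parameters positive
    alpha, k, lam = np.exp(log_alpha), np.exp(log_k), np.exp(log_lam)
    S = np.exp(-(t / lam) ** k)                  # Weibull survival function
    f = (k / lam) * (t / lam) ** (k - 1) * S     # Weibull density
    g = alpha * f / (1 - (1 - alpha) * S) ** 2   # Marshall-Olkin extended Weibull density
    return -np.sum(np.log(g))

rng = np.random.default_rng(0)
t = 2.0 * rng.weibull(1.8, size=300)             # hypothetical lifetimes

full = optimize.minimize(neg_loglik, x0=np.zeros(3), args=(t,), method="Nelder-Mead")
# Restricted fit with alpha fixed at 1 (log_alpha = 0): plain Weibull
restr = optimize.minimize(lambda p: neg_loglik(np.r_[0.0, p], t),
                          x0=np.zeros(2), method="Nelder-Mead")

lr = 2 * (restr.fun - full.fun)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.4f}")
```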
|
119 |
Estimação e teste de hipótese baseados em verossimilhanças perfiladas / "Point estimation and hypothesis test based on profile likelihoods"
Michel Ferreira da Silva 20 May 2005 (has links)
Tratar a função de verossimilhança perfilada como uma verossimilhança genuína pode levar a alguns problemas, como, por exemplo, inconsistência e ineficiência dos estimadores de máxima verossimilhança. Outro problema comum refere-se à aproximação usual da distribuição da estatística da razão de verossimilhanças pela distribuição qui-quadrado, que, dependendo da quantidade de parâmetros de perturbação, pode ser muito pobre. Desta forma, torna-se importante obter ajustes para tal função. Vários pesquisadores, incluindo Barndorff-Nielsen (1983,1994), Cox e Reid (1987,1992), McCullagh e Tibshirani (1990) e Stern (1997), propuseram modificações à função de verossimilhança perfilada. Tais ajustes consistem na incorporação de um termo à verossimilhança perfilada anteriormente à estimação e têm o efeito de diminuir os vieses da função escore e da informação. Este trabalho faz uma revisão desses ajustes e das aproximações para o ajuste de Barndorff-Nielsen (1983,1994) descritas em Severini (2000a). São apresentadas suas derivações, bem como suas propriedades. Para ilustrar suas aplicações, são derivados tais ajustes no contexto da família exponencial biparamétrica. Resultados de simulações de Monte Carlo são apresentados a fim de avaliar os desempenhos dos estimadores de máxima verossimilhança e dos testes da razão de verossimilhanças baseados em tais funções. Também são apresentadas aplicações dessas funções de verossimilhança em modelos não pertencentes à família exponencial biparamétrica, mais precisamente, na família de distribuições GA0(alfa,gama,L), usada para modelar dados de imagens de radar, e no modelo de Weibull, muito usado em aplicações da área da engenharia denominada confiabilidade, considerando dados completos e censurados. Aqui também foram obtidos resultados numéricos a fim de avaliar a qualidade dos ajustes sobre a verossimilhança perfilada, analogamente às simulações realizadas para a família exponencial biparamétrica. Vale mencionar que, no caso da família de distribuições GA0(alfa,gama,L), foi avaliada a aproximação da distribuição da estatística da razão de verossimilhanças sinalizada pela distribuição normal padrão. Além disso, no caso do modelo de Weibull, vale destacar que foram derivados resultados distribucionais relativos aos estimadores de máxima verossimilhança e às estatísticas da razão de verossimilhanças para dados completos e censurados, apresentados em apêndice. / The profile likelihood function is not genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983,1994), Cox and Reid (1987,1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function. They are defined in a such a way to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments and also approximations to the adjustments proposed by Barndorff-Nielsen (1983,1994), also described in Severini (2000a). We present derivations and the main properties of the different adjustments. We also obtain adjustments for likelihood-based inference in the two-parameter exponential family. 
Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(alfa,gama,L) family, which is commonly used to model radar image data, and the Weibull model, which is useful for reliability studies, the latter under both noncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alfa,gama,L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistic, both for noncensored and censored data.
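For concreteness, the sketch below shows an ordinary (unadjusted) profile log-likelihood for the Weibull shape parameter, with the scale profiled out analytically, and the resulting profile likelihood ratio test; the Barndorff-Nielsen, Cox-Reid and related adjustments discussed in the dissertation are not implemented here, and the data are simulated.

```python
# Illustrative sketch: unadjusted profile log-likelihood of the Weibull shape k,
# with the scale parameter profiled out in closed form.
import numpy as np
from scipy import stats, optimize

def profile_loglik(k: float, t: np.ndarray) -> float:
    lam_hat = np.mean(t ** k) ** (1.0 / k)        # MLE of the scale for fixed shape k
    z = t / lam_hat
    return np.sum(np.log(k / lam_hat) + (k - 1) * np.log(z) - z ** k)

rng = np.random.default_rng(7)
t = 2.0 * rng.weibull(1.5, size=100)              # hypothetical lifetimes

k_hat = optimize.minimize_scalar(lambda k: -profile_loglik(k, t),
                                 bounds=(0.05, 20), method="bounded").x
# Profile likelihood ratio test of H0: k = 1 (exponential lifetimes)
w = 2 * (profile_loglik(k_hat, t) - profile_loglik(1.0, t))
print(f"k_hat = {k_hat:.2f}, W = {w:.2f}, p = {stats.chi2.sf(w, df=1):.4f}")
```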
|
120 |
MELHORAMENTOS INFERENCIAIS NO MODELO BETA-SKEW-T-EGARCH / INFERENTIAL IMPROVEMENTS OF BETA-SKEW-T-EGARCH MODEL
Muller, Fernanda Maria 25 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. The maximum likelihood estimators have good asymptotic properties; however, in finite samples they can be considerably biased. Monte Carlo simulations were used to evaluate the finite-sample performance of the point estimators. Numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias correction procedures were considered to obtain more accurate estimators in small samples. Better forecast quality was observed when the model with bias-corrected estimators was considered. In addition, we propose a likelihood ratio test to assist in the selection of the Beta-Skew-t-EGARCH model with one or two volatility components. The numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered. The bootstrap-based test exhibited the null rejection rates closest to the nominal values. The evaluation results of the two-component tests showed their practical usefulness. Finally, an application of the proposed methods to the log-returns of the German stock index was presented. / O modelo Beta-Skew-t-EGARCH foi recentemente proposto para modelar a volatilidade
de retornos financeiros. A estimação dos parâmetros do modelo é feita via máxima verossimilhança.
Esses estimadores possuem boas propriedades assintóticas, mas em amostras
de tamanho finito eles podem ser consideravelmente viesados. Com a finalidade de avaliar as
propriedades dos estimadores, em amostras de tamanho finito, realizou-se um estudo de simulações
de Monte Carlo. Os resultados numéricos indicam que os estimadores de máxima
verossimilhança de alguns parâmetros do modelo são viesados em amostras de tamanho inferior
a 3000. Para obter estimadores pontuais mais acurados foram consideradas correções de
viés via o método bootstrap. Verificou-se que os estimadores corrigidos apresentaram menor
viés relativo percentual. Também foi observada melhor qualidade das previsões quando o modelo
com estimadores corrigidos são considerados. Para auxiliar na seleção entre o modelo
Beta-Skew-t-EGARCH com um ou dois componentes de volatilidade foi apresentado um teste
da razão de verossimilhanças. A avaliação numérica do teste de dois componentes proposto demonstrou
taxas de rejeição nula distorcidas em tamanhos amostrais menores ou iguais a 1000.
Para melhorar o desempenho do teste foram consideradas a correção bootstrap e a correção de
Bartlett bootstrap. Os resultados numéricos indicam a utilidade prática dos testes de dois componentes
propostos. O teste bootstrap exibiu taxas de rejeição nula mais próximas dos valores
nominais. Ao final do trabalho foi realizada uma aplicação dos testes de dois componentes e
do modelo Beta-Skew-t-EGARCH, bem como suas versões corrigidas, a dados do índice de
mercado da Alemanha.
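The bootstrap bias correction mentioned in both abstracts can be illustrated generically; the sketch below applies a parametric bootstrap correction to the maximum likelihood estimator of a normal variance (biased by the factor (n-1)/n), which stands in for the much more involved Beta-Skew-t-EGARCH likelihood.

```python
# Hedged sketch of parametric bootstrap bias correction, shown for the ML estimator
# of a normal variance; the correction mechanics mirror those used for richer models.
import numpy as np

def mle_var(x: np.ndarray) -> float:
    return np.mean((x - x.mean()) ** 2)   # ML estimator, biased downwards

rng = np.random.default_rng(123)
x = rng.normal(0.0, 2.0, size=30)
theta_hat = mle_var(x)

# Parametric bootstrap: re-estimate on B samples drawn from the fitted model
B = 2_000
boot = np.array([mle_var(rng.normal(x.mean(), np.sqrt(theta_hat), size=len(x)))
                 for _ in range(B)])
bias_est = boot.mean() - theta_hat
theta_bc = theta_hat - bias_est           # i.e. 2*theta_hat minus the bootstrap mean
print(f"ML estimate {theta_hat:.3f}  bias-corrected {theta_bc:.3f}  (true value 4.0)")
```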
|