11

Superiority tests for proportional odds models with and without cure fraction

Juliana Cecilia da Silva Teixeira 24 October 2017 (has links)
Studies that demonstrate the superiority of a drug over others already on the market are of great interest in clinical practice. Based on them, the Brazilian National Agency of Sanitary Surveillance (ANVISA) grants registration to new products that may cure faster, or increase patients' probability of cure, compared to the standard treatment. It is of the utmost importance that hypothesis tests control the probability of type I error, that is, the probability that a non-superior treatment is approved for use, and also achieve the regulated test power with as few individuals as possible. Existing hypothesis tests for this purpose either disregard the time until the event of interest occurs (allergic reaction, positive effect, etc.) or are based on the proportional hazards model. In practice, however, the proportional hazards assumption may not always be satisfied, as in trials where the hazards of the different study groups become equal over time. In this situation, the proportional odds model is more adequate for fitting the data. In this work we develop and investigate two hypothesis tests for superiority clinical trials, based on the comparison of survival curves under the assumption that the data follow the proportional survival odds model, one without the incorporation of a cure fraction and the other with it. Several simulation studies are conducted to analyze how well the tests control the probability of type I error, and the power they attain, when the data do or do not satisfy the tests' assumption, for several sample sizes and two estimation methods for the quantities of interest. We conclude that the probability of type I error is underestimated when the data do not satisfy the test's assumption and is controlled when they do, as expected. In general, we conclude that it is essential that the assumptions of superiority tests be satisfied.
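As a rough illustration of the kind of simulation study described (not the thesis' own superiority test), the sketch below draws survival times from a proportional-odds model with a log-logistic baseline and estimates the empirical type I error of a two-sample log-rank test by Monte Carlo. All parameter values, sample sizes and function names are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo type I error of a two-sample comparison when data
# are generated under a proportional-odds survival model (log-logistic baseline).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)

def po_sample(n, beta_x, alpha=1.0, kappa=2.0):
    """Inverse-CDF draw from S(t|x) = 1 / (1 + exp(beta_x) * (t/alpha)**kappa),
    i.e. a proportional-odds model with a log-logistic baseline."""
    u = rng.uniform(size=n)
    return alpha * ((1 - u) / (u * np.exp(beta_x))) ** (1 / kappa)

def logrank_stat(t1, t2):
    """Two-sample log-rank chi-square statistic for uncensored data."""
    times = np.unique(np.concatenate([t1, t2]))
    o1 = e1 = v = 0.0
    for t in times:
        n1, n2 = (t1 >= t).sum(), (t2 >= t).sum()   # at risk in each group
        d1, d2 = (t1 == t).sum(), (t2 == t).sum()   # events at time t
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue
        o1 += d1                                    # observed events, group 1
        e1 += d * n1 / n                            # expected under H0
        v += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (o1 - e1) ** 2 / v

n_rep, n_per_arm, level = 2000, 100, 0.05
rejections = 0
for _ in range(n_rep):
    control = po_sample(n_per_arm, beta_x=0.0)
    treated = po_sample(n_per_arm, beta_x=0.0)      # H0: no superiority
    p = chi2.sf(logrank_stat(control, treated), df=1)
    rejections += p < level
print(f"empirical type I error ~ {rejections / n_rep:.3f}")  # near 0.05
```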
12

A test based on local influence for assessing goodness of fit in beta regression models

RIBEIRO, Terezinha Késsia de Assis 12 February 2016 (has links)
The class of beta regression models introduced by Ferrari & Cribari-Neto (2004) is very useful for modelling rates and proportions. The model proposed by the authors is based on the assumption that the response variable is beta distributed under a parameterization indexed by mean and precision parameters. After fitting a regression model it is very important to carry out a diagnostic analysis, in order to verify possible deviations from the model assumptions and to detect observations that exert disproportionate influence on the parameter estimates. Local influence analysis, introduced by Cook (1986), is an approach for assessing the influence of observations. Based on the local influence method, Zhu & Zhang (2004) proposed a hypothesis test to detect the degree of discrepancy between the assumed model and the underlying model from which the data are generated. In this work we develop this test for the beta regression model with fixed and varying dispersion; we also propose an improvement of the test based on the bootstrap methodology, and a new test, likewise based on local influence but considering another perturbation scheme, namely perturbation of the precision parameter in the beta regression model with fixed dispersion. The performance of these tests is evaluated in terms of size and power. Finally, we apply the developed theory to a set of real data.
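A minimal, hedged sketch of the model underlying these tests: maximum-likelihood estimation of a fixed-dispersion beta regression with a logit mean link, fitted to synthetic data. The Zhu & Zhang local-influence statistic itself is not reproduced here, and all names and values are illustrative assumptions.

```python
# Fixed-dispersion beta regression (Ferrari & Cribari-Neto parameterization):
# y_i ~ Beta(mu_i * phi, (1 - mu_i) * phi), logit(mu_i) = x_i' beta.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def beta_reg_negloglik(params, X, y):
    """Negative log-likelihood; the last parameter is log(phi) to keep phi > 0."""
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta, true_phi = np.array([-0.5, 1.0]), 30.0
mu = expit(X @ true_beta)
y = rng.beta(mu * true_phi, (1 - mu) * true_phi)

fit = minimize(beta_reg_negloglik, x0=np.zeros(X.shape[1] + 1),
               args=(X, y), method="BFGS")
print("beta-hat:", fit.x[:-1], " phi-hat:", np.exp(fit.x[-1]))
```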
13

Test to evaluate the property of independent increments in a point process

Francys Andrews de Souza 26 June 2013 (has links)
In econometrics, a topic that has become central over the years is ultra-high-frequency analysis, that is, the analysis of trade-by-trade transactions. It has proved fundamental in modelling the intraday microstructure of the market. Even so, the theory around this topic is scarce and growing only modestly. We develop a hypothesis test to verify whether ultra-high-frequency data have independent and stationary increments; knowing this is of great importance in this setting, since many works take this hypothesis as their starting point. Moreover, Grimshaw et al. (2005) showed that when a continuous probability distribution is used to model economic data, one generally estimates an increasing intensity function, owing to biased results produced by rounding. In our work we use discrete distributions in order to circumvent this problem caused by the use of continuous distributions.
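The sketch below is an illustrative stand-in, not the thesis' test: it bins the event times of a point process into equal windows and permutation-tests the lag-1 autocorrelation of the counts, which should be near zero when increments are independent and stationary. The window width and sizes are assumptions.

```python
# Permutation check of independent increments via binned event counts.
import numpy as np

rng = np.random.default_rng(1)

def lag1_corr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def independence_test(event_times, width=1.0, n_perm=5000):
    edges = np.arange(0.0, event_times.max() + width, width)
    counts = np.histogram(event_times, bins=edges)[0]
    observed = lag1_corr(counts)
    # Under independent, stationary increments the bin counts are exchangeable,
    # so permuting them gives a null distribution for the statistic.
    null = np.array([lag1_corr(rng.permutation(counts)) for _ in range(n_perm)])
    return observed, np.mean(np.abs(null) >= abs(observed))

# Homogeneous Poisson process: increments are independent by construction.
times = np.cumsum(rng.exponential(scale=0.2, size=2000))
stat, p = independence_test(times)
print(f"lag-1 corr = {stat:.3f}, permutation p-value = {p:.3f}")
```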
14

Method to statistically secure field test results for rock drilling tools

Olander, Frida January 2023 (has links)
The aim of this project is to find a statistical method that describes the survival and outcome of a prototype drilling tool relative to several reference tools, and to determine how many references are needed for a statistically secure result. The project also answers which variables affect the outcome. 60 drill bits of two different types were studied, and the variables investigated are penetration rate, rotation speed, rotation torque, flush flow, feed force and accumulated depth. Hypothesis testing is used to determine how a prototype drills compared to the references, and linear regression is used to determine how the references drill. Only accumulated depth affects the survival time, with a correlation of 77.7% that was improved to 89.6% by removing large time gaps that were not of interest for the result. The accuracy of the linear regression for 30 drill bits of one type, relating maximum depth to survival time, was 80.3%. A minimum of 20 prototypes must be tested before determining the outcome of the prototypes in comparison to the references.
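A minimal sketch of the regression step described, run on synthetic stand-in data (the field measurements are not public): ordinary least squares of survival time on accumulated depth, reporting the correlation. Units and coefficients are illustrative assumptions.

```python
# OLS of drill-bit survival time on accumulated depth, with correlation r.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
accumulated_depth = rng.uniform(50, 500, size=30)                      # assumed metres
survival_time = 0.4 * accumulated_depth + rng.normal(0, 25, size=30)   # assumed hours

fit = linregress(accumulated_depth, survival_time)
print(f"r = {fit.rvalue:.3f}, R^2 = {fit.rvalue**2:.3f}, "
      f"slope = {fit.slope:.3f} (p = {fit.pvalue:.2e})")
```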
15

Rogue Access Point Detection through Statistical Analysis

Kanaujia, Swati 26 May 2010 (has links)
The IEEE 802.11 based Wireless LAN (WLAN) has become increasingly ubiquitous in recent years. However, due to the broadcast nature of wireless communication, attackers can exploit existing vulnerabilities in IEEE 802.11 to launch various types of attacks on wireless and wired networks. This thesis presents a statistics-based hybrid Intrusion Detection System (IDS) for Rogue Access Point (RAP) detection, which employs distributed monitoring devices to monitor 802.11 link-layer activities and a centralized detection module at a gateway router to achieve higher accuracy in the detection of rogue devices. This detection approach is scalable, non-intrusive and does not require any specialized hardware. It is designed to utilize the existing wireless LAN infrastructure and is independent of 802.11a/b/g/n. It works by passive monitoring of wired and wireless traffic, and hence is easy to manage and maintain. In addition, this approach requires monitoring a smaller number of packets for detection than other detection approaches in a heterogeneous network comprised of wireless and wired subnets. Centralized detection is done at a gateway router by differentiating wired and wireless TCP traffic using weighted sequential hypothesis testing on the inter-arrival times of TCP ACK-pairs. A decentralized module takes care of detecting MAC spoofing and relies entirely on 802.11 beacon frames. Detection is done through analysis of the clock skew and the Received Signal Strength (RSS) as fingerprints, using a naïve Bayes classifier to detect the presence of rogue APs. Analysis of the system and extensive experiments in various scenarios on a real system have proven the efficiency and accuracy of the approach, with few false positives/negatives and low computational and storage overhead. / Master of Science
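As a hedged sketch of the decentralized fingerprinting step, the code below trains a Gaussian naïve Bayes classifier on synthetic (clock skew, RSS) pairs. The monitoring pipeline and real beacon measurements are not reproduced, and all feature values are assumptions.

```python
# Gaussian naive Bayes on (clock skew, RSS) fingerprints from beacon frames.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
# Authorized AP: stable clock skew (ppm) and RSS (dBm) around known values.
auth = np.column_stack([rng.normal(12.0, 0.5, 300), rng.normal(-48, 2.0, 300)])
# Rogue AP spoofing the same MAC: different hardware skew, different location.
rogue = np.column_stack([rng.normal(17.0, 0.5, 300), rng.normal(-63, 2.0, 300)])

X = np.vstack([auth, rogue])
y = np.array([0] * len(auth) + [1] * len(rogue))   # 0 = authorized, 1 = rogue

clf = GaussianNB().fit(X, y)
beacon = np.array([[16.8, -61.0]])                 # fingerprint of a new beacon
print("rogue probability:", clf.predict_proba(beacon)[0, 1])
```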
16

Multiscale fractality with application and statistical modeling and estimation for computer experiment of nano-particle fabrication

Woo, Hin Kyeol 24 August 2012 (has links)
The first chapter proposes multifractal analysis to measure the inhomogeneity of regularity of a 1H-NMR spectrum using wavelet-based multifractal tools. The geometric summaries of the multifractal spectrum are informative, and as such are employed to discriminate 1H-NMR spectra associated with different treatments. The methodology is applied to evaluate the effect of sulfur amino acids. The second part of this thesis provides essential material for understanding the engineering background of a nano-particle fabrication process. The third chapter introduces a constrained random-effects model. Since certain combinations of process variables result in unproductive process outcomes, a logistic model is used to characterize this process behavior; for the cases with productive outcomes, a normal regression serves as the second part of the model. Additionally, random effects are included in both the logistic and normal regression models to describe the potential spatial correlation among the data. This chapter investigates a way to approximate the likelihood function and to find estimates maximizing the approximated likelihood. The last chapter presents a method to decide the sample size under a multi-layer system, a series of layers that become progressively smaller. Our focus is deciding the sample size in each layer. The sample size decision has several objectives, the most important being that the sample size should be large enough to point the next layer in the right direction. Specifically, the bottom layer, the smallest neighborhood around the optimum, should meet the tolerance requirement. Performing a hypothesis test of whether the next layer includes the optimum gives the required sample size.
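A hedged sketch of the two-part structure described, with the random effects omitted for brevity: a logistic model for whether a run is productive, followed by a normal regression fitted on the productive runs only. Data and coefficients are illustrative assumptions.

```python
# Two-part (hurdle-style) model: logistic gate, then normal regression.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 2))                         # process variables
productive = rng.uniform(size=n) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5)))
outcome = np.where(productive,
                   2.0 + 1.5 * X[:, 1] + rng.normal(0, 0.3, n), np.nan)

part1 = LogisticRegression().fit(X, productive)     # P(productive | x)
part2 = LinearRegression().fit(X[productive], outcome[productive])

print("P(productive | x0):", part1.predict_proba(X[:1])[0, 1])
print("E[outcome | productive, x0]:", part2.predict(X[:1])[0])
```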
17

Population SAMC, ChIP-chip Data Analysis and Beyond

Wu, Mingqi December 2010 (has links)
This dissertation research consists of two topics: population stochastic approximation Monte Carlo (Pop-SAMC) for Bayesian model selection problems, and ChIP-chip data analysis. The following two paragraphs give a brief introduction to each of the two topics, respectively. Although reversible jump MCMC (RJMCMC) has the ability to traverse the space of possible models in Bayesian model selection problems, it is prone to becoming trapped in local modes when the model space is complex. SAMC, proposed by Liang, Liu and Carroll, essentially overcomes the difficulty of dimension-jumping moves by introducing a self-adjusting mechanism. However, this learning mechanism has not yet reached its maximum efficiency. In this dissertation, we propose a Pop-SAMC algorithm; it works on population chains of SAMC, which provide a more efficient self-adjusting mechanism and make use of the crossover operator from genetic algorithms to further increase efficiency. Under mild conditions, the convergence of this algorithm is proved. The effectiveness of Pop-SAMC in Bayesian model selection problems is examined through a change-point identification example and a large-p linear regression variable selection example. The numerical results indicate that Pop-SAMC outperforms both single-chain SAMC and RJMCMC significantly. In the ChIP-chip data analysis study, we developed two methodologies to identify transcription factor binding sites: a Bayesian latent model and a population-based test. The former models the neighboring dependence of probes by introducing a latent indicator vector; the latter provides a nonparametric method for evaluating test scores in a multiple hypothesis test by making use of population information across samples. Both methods are applied to real and simulated datasets. The numerical results indicate that the Bayesian latent model can outperform existing methods, especially when the data contain outliers, and that the use of population information can significantly improve the power of multiple hypothesis tests.
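A toy sketch of the single-chain SAMC self-adjusting mechanism that Pop-SAMC builds on (the population and crossover extensions are omitted): the sampler penalizes the log-weight of whichever energy partition it currently occupies, pushing it out of local modes. The target and all tuning constants are illustrative assumptions.

```python
# Minimal SAMC on a bimodal discrete target: self-adjusting weights over
# energy partitions let the chain escape local modes.
import numpy as np

rng = np.random.default_rng(11)

grid = np.arange(100)                    # state space {0, ..., 99}
log_pi = -0.08 * np.minimum((grid - 15) ** 2, (grid - 80) ** 2)  # two modes

m = 10                                   # number of energy partitions
energy = -log_pi
edges = np.linspace(energy.min(), energy.max(), m + 1)
region = np.clip(np.digitize(energy, edges) - 1, 0, m - 1)

theta = np.zeros(m)                      # self-adjusting log-weights
x, t0 = 0, 1000
visits = np.zeros(grid.size)
for t in range(1, 200_001):
    x_new = min(max(x + rng.choice([-1, 1]), 0), 99)   # reflected random walk
    # Metropolis ratio for the weighted target pi(x) * exp(-theta[region(x)]).
    log_ratio = (log_pi[x_new] - theta[region[x_new]]) \
                - (log_pi[x] - theta[region[x]])
    if np.log(rng.uniform()) < log_ratio:
        x = x_new
    gamma = t0 / max(t0, t)              # decreasing gain sequence
    theta -= gamma / m                   # desired visiting proportion 1/m ...
    theta[region[x]] += gamma            # ... raised for the region just visited
    visits[x] += 1

# Both modes are visited substantially, demonstrating escape from local traps
# (the visit counts follow the weighted, not the original, distribution).
print("time near mode 15:", visits[5:26].sum() / visits.sum())
print("time near mode 80:", visits[70:91].sum() / visits.sum())
```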
18

Momentum Investment Strategies with Portfolio Optimization : A Study on Nasdaq OMX Stockholm Large Cap

Jonsson, Robin, Radeschnig, Jessica January 2014 (has links)
This report covers a study testing the possibility of adding portfolio optimization by mean-variance analysis as a tool to extend the concept of momentum strategies, in contrast to the naive allocation of Jegadeesh & Titman (1993). Furthermore, these active investment strategies are compared with a passive benchmark as well as a randomly selected portfolio over the entire study period. The study showed that the naive allocation model outperformed the mean-variance model both economically and statistically. No indication of a lagged return effect was obtained when letting a mean-variance model choose weights for a quarterly holding period, and the resulting investment recommendation is to follow a naive investment strategy within a momentum framework.
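A sketch of the mean-variance step layered on a momentum screen: given past returns of the assets selected by the momentum rule, compute unconstrained tangency-portfolio weights proportional to the inverse covariance times mean excess returns. The returns here are synthetic stand-ins, not the Nasdaq OMX data.

```python
# Mean-variance (tangency) weights vs. the naive equal-weight benchmark.
import numpy as np

rng = np.random.default_rng(2014)
n_assets, n_days, rf = 10, 250, 0.0001
returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))   # daily returns

mu = returns.mean(axis=0) - rf             # mean excess returns
sigma = np.cov(returns, rowvar=False)      # sample covariance matrix

raw = np.linalg.solve(sigma, mu)           # Sigma^{-1} (mu - rf)
weights = raw / raw.sum()                  # normalize to fully invested
print("mean-variance weights:", np.round(weights, 3))

# The naive allocation of Jegadeesh & Titman (1993) is simply equal weighting:
naive = np.full(n_assets, 1 / n_assets)
```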
19

Distribution Theory of Some Nonparametric Statistics via Finite Markov Chain Imbedding Technique

Lee, Wan-Chen 16 April 2014 (has links)
The ranking method used for testing the equivalence of two distributions has been studied for decades and is widely adopted for its simplicity. However, due to the complexity of the calculations, the power of the test is either estimated by normal approximation or found only when an appropriate alternative is given. Here, via the finite Markov chain imbedding (FMCI) technique, we establish the marginal and joint distributions of the rank statistics, considering the shift and scale parameters respectively and simultaneously, under two continuous distribution functions. Furthermore, the procedures of distribution equivalence tests and their power functions are discussed. Numerical results for the joint distribution of two rank statistics under the standard normal distribution, and the powers for a sequence of alternative normal distributions with means from -20 to 20 and standard deviations from 1 to 9 and their reciprocals, are presented. In addition, we discuss the powers of the rank statistics under Lehmann alternatives. Wallenstein et al. (1993, 1994) discussed power via combinatorial calculations for the scan statistic against a pulse alternative; however, unless certain proper conditions are met, computational difficulties exist. Our work extends their results and provides an alternative way to obtain the distribution of a scan statistic under various alternative conditions. An efficient and intuitive expression for the distribution, as well as the power, of the scan statistic is introduced via FMCI. Numerical results for the exact power of a discrete scan statistic under various conditions are presented. Powers obtained through the finite Markov chain imbedding method and a combinatorial algorithm for a continuous scan statistic, against a pulse alternative of higher risk for a disease on a specified subinterval of time, are also discussed and compared.
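The FMCI computations themselves are involved; as a small illustration in the same spirit, the dynamic program below walks through the ranks 1..N one at a time, tracking (number chosen, running rank sum) as the state of a finite chain, and recovers the exact null distribution of the Wilcoxon rank-sum statistic. This is a textbook example under assumed sizes, not the thesis' scan-statistic construction.

```python
# Exact null pmf of the rank-sum statistic via a finite-state recursion.
import numpy as np

def rank_sum_null(m, n):
    """P(W = w) for the rank sum W of the first sample (size m) when every
    assignment of the ranks 1..N, N = m + n, is equally likely."""
    N = m + n
    max_w = m * N                      # loose upper bound on the rank sum
    # counts[j, w] = ways to choose j of the ranks seen so far with sum w
    counts = np.zeros((m + 1, max_w + 1), dtype=np.int64)
    counts[0, 0] = 1
    for rank in range(1, N + 1):                 # one chain step per rank
        for j in range(min(rank, m), 0, -1):     # descend: use each rank once
            counts[j, rank:] += counts[j - 1, :-rank]
    return counts[m] / counts[m].sum()

pmf = rank_sum_null(5, 7)
w = np.arange(pmf.size)
print("total mass:", pmf.sum())                  # 1.0
print("E[W] =", (w * pmf).sum())                 # theory: m * (N + 1) / 2 = 32.5
```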
20

Proteomics and metabolomics in biological and medical applications

Shiryaeva, Liudmila January 2011 (has links)
Biological processes in living organisms consist of a vast number of different molecular networks and interactions, which are complex and often hidden from our understanding. This work is focused on recovering such details for two quite distant examples: acclimation to extreme freezing tolerance in Siberian spruce (Picea obovata) and detection of proteins associated with prostate cancer. The first biological system in the study, P. obovata, is interesting for this species' ability to adapt to and sustain extremely low temperatures, such as -60°C or below. Despite decades of investigation, the essential features and mechanisms behind this remarkable ability remain unclear. To enhance knowledge about extreme freezing tolerance, the metabolome and proteome of P. obovata needles, collected during the tree's acclimation period from mid-August to January, have been analyzed. The second system within this study is the plasma proteome analysis of high-risk prostate cancer (PCa) patients with and without bone metastases. PCa is one of the most common cancers among Swedish men and can abruptly develop into an aggressive, lethal disease. The diagnostic tools, including PSA tests, are insufficient for predicting the disease's aggressiveness, and novel prognostic markers are urgently required. Both biological systems have been analyzed following similar steps: by two-dimensional difference gel electrophoresis (2D-DIGE), followed by protein identification using mass spectrometry (MS) and multivariate methods. Data processing has been used to search for proteins that serve as unique indicators characterizing the status of the systems. In addition, a gas chromatography-mass spectrometry (GC-MS) study of the metabolic content of P. obovata needles over the extended observation period has been performed. The studies of both systems, combined with thorough statistical analysis of the experimental outcomes, have resulted in novel insights for both P. obovata and prostate cancer. In particular, it has been shown that dehydrins, Hsp70s, AAA+ ATPases, lipocalin and several proteins involved in cellular metabolism, among others, can be uniquely associated with acclimation to extreme freezing in conifers. Metabolomic analysis of P. obovata needles has revealed systematic changes in carbohydrate and lipid metabolism. A substantial increase in raffinose, and an accumulation of desaturated fatty acids, sugar acids, sugar alcohols, amino acids and polyamines that may act as compatible solutes or cryoprotectants, have all been observed during the acclimation process. Proteins relevant to prostate cancer progression and aggressiveness have been identified in the plasma proteome study, for patients with and without bone metastases. Among them are proteins associated with lipid transport, coagulation, inflammation and immune response.
