111

Assessment of balance control in relation to fall risk among older people

Nordin, Ellinor January 2008 (has links)
Falls and their consequences among older people are a serious medical and public health problem. Identifying individuals at risk of falling is therefore a major concern. The purpose of this thesis was to evaluate measurement tools of balance control and their predictive value when screening for fall risk in physically dependent individuals ≥65 years old living in residential care facilities, and physically independent individuals ≥75 years old living in the community. Following baseline assessments, falls were monitored for six months in physically dependent individuals based on staff reports, and for one year in physically independent individuals based on self-reports. In physically dependent individuals, test-retest reliability of the Timed Up & Go test (TUG) was established in relation to cognitive impairment. Absolute reliability measures exposed substantial day-to-day variability in mobility performance at an individual level despite excellent relative reliability (ICC 1.1 >0.90) regardless of cognitive function (MMSE ≥10). Fifty-three percent of the participants fell at least once during follow-up. Staff judgement of their residents’ fall risk had the best prognostic value for ruling in a fall risk in individuals judged to be at ‘high risk’ (positive likelihood ratio, LR+ 2.8). Timed and subjective ratings of fall risk (modified Get Up & Go test, GUG-m) were useful for ruling out a high fall risk in individuals with TUG scores <15 seconds (negative LR, LR- 0.1) and GUG-m scores of ‘no fall risk’ (LR- 0.4); however, few participants achieved such scores. In physically independent individuals, balance control was challenged by dual-task performances. The resulting dual-task costs in gait (DTC), i.e. the difference between single walking and walking with a simultaneous second task, were registered using an electronic mat. Forty-eight percent of the participants fell at least once during follow-up. Small prognostic guidance for ruling in a high fall risk was found for a DTC in mean step width of ≤3.7 mm with a manual task (LR+ 2.3), and small guidance for ruling out a high fall risk for a DTC in mean step width of ≤3.6 mm with a cognitive task (LR- 0.5). In cross-sectional evaluations, DTC related to an increased fall risk were associated with: sub-maximal physical performance stance scores (odds ratio, OR, 3.2 to 3.8), lower self-reported balance confidence (OR 2.6), higher activity avoidance (OR 2.1), mobility disability (OR 4.0), and cautious walking outdoors (OR 3.0). However, these other measures of physical function failed to provide any guidance to fall risk in this population of seemingly able older persons. In conclusion, fall risk assessments may guide clinicians in two directions, either in ruling in or in ruling out a high fall risk. A single cut-off score, however, does not necessarily give guidance in both directions. Staff's experience-based knowledge is superior to a single assessment of mobility performance for ruling in a high fall risk. Clinicians need to consider the day-to-day variability in mobility when interpreting the TUG score of a physically dependent individual. DTC of gait can, depending on the type of secondary task, indicate a functional limitation related to an increased fall risk or a flexible capacity related to a decreased fall risk. 
DTC in mean step width seems to be a valid measure of balance control in physically independent older people and may be a valuable part of the physical examination of balance and gait when screening for fall risk, as other measures of balance control may fail to provide any guidance on fall risk in this population.
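The LR+ and LR- figures quoted above, and the dual-task cost itself, follow from simple formulas; the sketch below illustrates them with invented counts and gait values, not data from the thesis:

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Diagnostic likelihood ratios from a 2x2 screening table
    (rows: screen positive/negative, columns: faller/non-faller)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_plus = sensitivity / (1 - specificity)    # guidance for ruling in a high fall risk
    lr_minus = (1 - sensitivity) / specificity   # guidance for ruling out a high fall risk
    return lr_plus, lr_minus

def dual_task_cost(single_task, dual_task):
    """Dual-task cost in gait: difference between single walking and walking
    with a simultaneous second task (the sign convention here is assumed)."""
    return dual_task - single_task

# Invented example: 30 fallers screened positive, 25 screened negative, etc.
print(likelihood_ratios(tp=30, fp=20, fn=25, tn=40))
print(dual_task_cost(single_task=45.0, dual_task=41.3))  # e.g. mean step width in mm
```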
112

Path Extraction of Low SNR Dim Targets from Grayscale 2-D Image Sequences

Erguven, Sait 01 September 2006 (has links) (PDF)
In this thesis, an algorithm for visual detection and tracking of very low SNR targets, i.e. dim targets, is developed. Image processing of a single frame in time cannot be used for this aim due to the closeness of the intensity spectra of the background and the target. Therefore, change detection of super pixels, a group of pixels that has sufficient statistics for likelihood ratio testing, is proposed. Super pixels that are determined as transition points are marked on a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its vector movement in the next frame by the Label Destruction and Centroids Mapping techniques. Candidate centroids are put through Distribution Density Function Maximization and Maximum Histogram Size Filtering to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometrical centroids obtained in each frame are used as the observed target path, which is fed into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on quantization of possible states, i.e. observed target path centroids, and the Viterbi Algorithm. According to the system and observation models, metric values of all possible target paths are computed using observation and transition probabilities. The path which results in the maximum metric value at the last frame is decided as the estimated target path.
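The final smoothing step, computing path metrics from transition and observation probabilities and keeping the path with the maximum metric at the last frame, is standard Viterbi decoding. A minimal sketch with assumed inputs follows; the transition and observation log-probabilities are generic placeholders, not the thesis' specific models:

```python
import numpy as np

def viterbi_path(log_obs, log_trans):
    """Viterbi decoding over quantized path states.
    log_obs[t, j]:   log-likelihood of the frame-t observation under state j.
    log_trans[i, j]: log transition probability from state i to state j.
    Returns the index sequence of the maximum-metric (smoothed) state path."""
    T, S = log_obs.shape
    metric = np.full((T, S), -np.inf)
    backptr = np.zeros((T, S), dtype=int)
    metric[0] = log_obs[0]                      # uniform prior over initial states
    for t in range(1, T):
        for j in range(S):
            scores = metric[t - 1] + log_trans[:, j] + log_obs[t, j]
            backptr[t, j] = int(np.argmax(scores))
            metric[t, j] = scores[backptr[t, j]]
    path = [int(np.argmax(metric[-1]))]         # best metric at the last frame
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```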
113

Sequential probability ratio tests based on grouped observations

Eger, Karl-Heinz, Tsoy, Evgeni Borisovich 26 June 2010 (has links) (PDF)
This paper deals with sequential likelihood ratio tests based on grouped observations. It is demonstrated that the method of conjugated parameter pairs known from the non-grouped case can be extended to the grouped case, yielding Wald-like approximations for the OC and ASN functions. For near hypotheses, so-called F-optimal groupings are recommended. As an example, an SPRT based on grouped observations for the parameter of an exponentially distributed random variable is considered.
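As a rough illustration of the example mentioned, the sketch below runs an SPRT for the rate of an exponential distribution in which each observation is reported only as the interval (group) it falls into. The grouping edges, rates and error levels are arbitrary assumptions, and no F-optimality of the grouping is claimed:

```python
import math
import random

def sprt_grouped_exponential(samples, edges, lam0, lam1, alpha=0.05, beta=0.05):
    """SPRT of H0: rate = lam0 vs H1: rate = lam1, using only the group
    (interval between consecutive edges) into which each observation falls."""
    lower = math.log(beta / (1 - alpha))      # accept H0 at or below this boundary
    upper = math.log((1 - beta) / alpha)      # accept H1 at or above this boundary

    def cell_prob(lam, lo, hi):
        return math.exp(-lam * lo) - (0.0 if hi == math.inf else math.exp(-lam * hi))

    llr = 0.0
    for n, x in enumerate(samples, start=1):
        for lo, hi in zip(edges[:-1], edges[1:]):
            if lo <= x < hi:
                llr += math.log(cell_prob(lam1, lo, hi) / cell_prob(lam0, lo, hi))
                break
        if llr <= lower:
            return "accept H0", n
        if llr >= upper:
            return "accept H1", n
    return "no decision yet", len(samples)

# Invented example: grouping at 0.5 and 1.5 time units, open last cell.
random.seed(0)
data = [random.expovariate(2.0) for _ in range(200)]
print(sprt_grouped_exponential(data, [0.0, 0.5, 1.5, math.inf], lam0=1.0, lam1=2.0))
```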
114

Model-Based Optimization of Clinical Trial Designs

Vong, Camille January 2014 (has links)
High attrition rates in the drug development pipeline have been recognized as a reason to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the core strategies of this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit to usher new agents more rapidly and successfully to marketing approval. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped power method made it possible to rapidly generate multiple hypotheses and to compute the corresponding sample sizes within 1% of the time usually necessary for more traditional model-based power assessment. By allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness for the ensuing data analysis and reduced the number of patients exposed to severe toxicity by 7-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by a main dose-limiting hematological toxicity. In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based “3+3” design algorithm. Throughout this thesis several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can elevate the probability of a successful trial and reinforce the ethical conduct that is paramount in clinical research.
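A model-based power assessment ultimately reduces to estimating, by simulation, the probability that a trial of a given size detects the drug effect, and picking the smallest size that reaches the target power. The sketch below shows only that generic Monte Carlo idea; it is not the Monte-Carlo Mapped Power method itself, and the effect size, variability and power target are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mc_power(n_per_arm, effect, sd, n_sim=1000, alpha=0.05):
    """Monte Carlo power of a two-arm comparison: fraction of simulated
    trials in which the treatment effect is detected at level alpha."""
    hits = 0
    for _ in range(n_sim):
        placebo = rng.normal(0.0, sd, n_per_arm)
        active = rng.normal(effect, sd, n_per_arm)
        if stats.ttest_ind(active, placebo).pvalue < alpha:
            hits += 1
    return hits / n_sim

# Smallest arm size reaching 80% power under these assumed settings.
n = next(n for n in range(10, 500, 10) if mc_power(n, effect=1.0, sd=2.5) >= 0.80)
print(n)
```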
115

Testing the compatibility of constraints for parameters of a geodetic adjustment model

Lehmann, Rüdiger, Neitzel, Frank 06 August 2014 (has links) (PDF)
Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint, in such a way that if one of them exceeds a certain threshold in magnitude, then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, in which the full set of constraints is to be tested, and arrive at the advanced case, in which each constraint is to be tested individually. Every test is worked out both for a known and for an unknown prior variance factor. The corresponding distributions under the null and alternative hypotheses are derived. The theory is illustrated by the example of a double levelled line.
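For orientation, the sketch below sets up a small constrained least-squares adjustment and computes the Lagrange multipliers and their normalized values. It is a generic Gauss-Markov sketch under assumed matrices, not the paper's specific test statistics or their distributions:

```python
import numpy as np

def constrained_adjustment(A, P, l, C, w):
    """Least-squares adjustment of observations l = A x + e (weight matrix P)
    subject to constraints C x = w. Returns the parameter estimate, the
    Lagrange multipliers and their normalized values."""
    N = A.T @ P @ A
    N_inv = np.linalg.inv(N)
    Qkk = np.linalg.inv(C @ N_inv @ C.T)          # cofactor matrix of the multipliers
    x_free = N_inv @ (A.T @ P @ l)                # unconstrained solution
    k = Qkk @ (C @ x_free - w)                    # Lagrange multipliers
    x = x_free - N_inv @ C.T @ k                  # constrained solution
    v = A @ x - l                                 # residuals
    dof = A.shape[0] - A.shape[1] + C.shape[0]    # redundancy including constraints
    s02 = float(v.T @ P @ v) / dof                # a posteriori variance factor
    k_norm = k / np.sqrt(s02 * np.diag(Qkk))      # normalized Lagrange multipliers
    return x, k, k_norm

# Tiny invented example: two parameters, four observations, one constraint x1 = x2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
P = np.eye(4)
l = np.array([1.02, 0.98, 2.05, 0.01])
C, w = np.array([[1.0, -1.0]]), np.array([0.0])
print(constrained_adjustment(A, P, l, C, w))
```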
116

Distribuição normal assimétrica para dados de expressão gênica / Skew-normal distribution for gene expression data

Gomes, Priscila da Silva 17 April 2009 (has links)
Microarray technologies are used to measure the expression levels of a large number of genes or gene fragments simultaneously under different conditions. This technology is useful to determine genes that are responsible for genetic diseases. A common statistical methodology used to determine whether a gene g shows evidence of different expression levels is the t-test, which requires the assumption of normality for the data (Saraiva, 2006; Baldi & Long, 2001). However, this assumption sometimes does not agree with the nature of the analyzed data. In this work we use the skew-normal distribution, formally described by Azzalini (1985), which has the normal distribution as a particular case, in order to relax the assumption of normality. Under a frequentist approach, we carried out a simulation study to detect differences between gene expression levels in control and treatment conditions through the t-test. Another simulation was carried out to examine the power of the t-test when an asymmetric model is assumed for the data. We also used the likelihood ratio test to verify the adequacy of an asymmetric model for the data.
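A minimal sketch of that last step follows: a likelihood ratio test of the normal model against the skew-normal alternative for one gene's expression values. The data are simulated and the chi-squared reference with one degree of freedom is the standard large-sample assumption, not a result taken from the dissertation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
expr = stats.skewnorm.rvs(a=4.0, loc=0.0, scale=1.0, size=200, random_state=rng)

# Fit both models to the same expression values.
a_hat, loc_sn, scale_sn = stats.skewnorm.fit(expr)
loc_n, scale_n = stats.norm.fit(expr)

ll_skew = stats.skewnorm.logpdf(expr, a_hat, loc_sn, scale_sn).sum()
ll_norm = stats.norm.logpdf(expr, loc_n, scale_n).sum()

lr_stat = 2.0 * (ll_skew - ll_norm)          # H0: skewness parameter a = 0
p_value = stats.chi2.sf(lr_stat, df=1)       # one extra parameter under H1
print(lr_stat, p_value)
```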
117

Testes em modelos Weibull na forma estendida de Marshall-Olkin / Tests in Weibull models in the Marshall-Olkin extended form

Magalh?es, Felipe Henrique Alves 28 December 2011 (has links)
In survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time, and the main characteristic of survival data is the presence of censoring, which is a partial observation of the response. For such data, some models occupy an important position because they properly fit several practical situations, among which we can mention the Weibull model. Marshall-Olkin extended-form distributions offer a generalization of basic distributions that allows greater flexibility in fitting lifetime data. This work presents a simulation study that compares the gradient test and the likelihood ratio test using the Weibull distribution in its Marshall-Olkin extended form. As a result, only a small advantage is found for the likelihood ratio test.
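To make the comparison concrete, the sketch below fits the Marshall-Olkin extended Weibull by maximum likelihood and computes the likelihood ratio statistic for the null hypothesis that the extra Marshall-Olkin parameter equals one, i.e. an ordinary Weibull. The gradient test is omitted, and the parameterization and simulated data are illustrative assumptions:

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, t):
    """Negative log-likelihood of the Marshall-Olkin extended Weibull with
    survival gamma*S(t) / (1 - (1-gamma)*S(t)), S(t) = exp(-(t/scale)**shape).
    Parameters are passed on the log scale to keep them positive."""
    gamma, shape, scale = np.exp(params)
    S = np.exp(-(t / scale) ** shape)
    f = (shape / scale) * (t / scale) ** (shape - 1) * S          # Weibull density
    g = gamma * f / (1.0 - (1.0 - gamma) * S) ** 2                # extended density
    return -np.sum(np.log(g))

t = stats.weibull_min.rvs(1.5, scale=2.0, size=300, random_state=0)

full = optimize.minimize(neg_loglik, x0=np.zeros(3), args=(t,), method="Nelder-Mead")
null = optimize.minimize(lambda p: neg_loglik(np.concatenate(([0.0], p)), t),
                         x0=np.zeros(2), method="Nelder-Mead")

lr = 2.0 * (null.fun - full.fun)              # H0: gamma = 1 (ordinary Weibull)
print(lr, stats.chi2.sf(lr, df=1))
```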
118

Estimação e teste de hipótese baseados em verossimilhanças perfiladas / "Point estimation and hypothesis test based on profile likelihoods"

Michel Ferreira da Silva 20 May 2005 (has links)
The profile likelihood function is not a genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983, 1994), Cox and Reid (1987, 1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function. They are defined in such a way as to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments and also approximations to the adjustments proposed by Barndorff-Nielsen (1983, 1994), also described in Severini (2000a). We present derivations and the main properties of the different adjustments. We also obtain adjustments for likelihood-based inference in the two-parameter exponential family. 
Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(alfa, gama, L) family, which is commonly used to model radar image data, and the Weibull model, which is useful for reliability studies, the latter under both noncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alfa, gama, L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistic both for noncensored and censored data.
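As a point of reference for the adjustments reviewed, the sketch below computes an unadjusted profile likelihood (Weibull shape as the parameter of interest, with the scale maximized out as a nuisance parameter) and the corresponding profile likelihood ratio statistic. The adjustments studied in the dissertation modify this function by an additional term, which is not implemented here, and the data are simulated:

```python
import numpy as np
from scipy import optimize, stats

def weibull_loglik(shape, scale, t):
    return np.sum(stats.weibull_min.logpdf(t, shape, scale=scale))

def profile_loglik(shape, t):
    """Profile log-likelihood of the Weibull shape: the scale (nuisance
    parameter) is maximized out for each fixed value of the shape."""
    res = optimize.minimize_scalar(lambda s: -weibull_loglik(shape, s, t),
                                   bounds=(1e-3, 1e3), method="bounded")
    return -res.fun

t = stats.weibull_min.rvs(2.0, scale=3.0, size=50, random_state=1)
shape_hat = optimize.minimize_scalar(lambda k: -profile_loglik(k, t),
                                     bounds=(0.1, 10), method="bounded").x

# Profile likelihood ratio statistic for H0: shape = 1 (exponential data).
lr = 2.0 * (profile_loglik(shape_hat, t) - profile_loglik(1.0, t))
print(shape_hat, lr, stats.chi2.sf(lr, df=1))
```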
119

Melhoramentos inferenciais no modelo Beta-Skew-t-EGARCH / Inferential improvements of the Beta-Skew-t-EGARCH model

Muller, Fernanda Maria 25 February 2016 (has links)
The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. The maximum likelihood estimators have good asymptotic properties; however, in finite sample sizes they can be considerably biased. Monte Carlo simulations were used to evaluate the finite-sample performance of the point estimators. Numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias correction procedures were considered to obtain more accurate estimators in small samples. Better quality of forecasts was observed when the model with bias-corrected estimators was considered. In addition, we propose a likelihood ratio test to assist in the selection of the Beta-Skew-t-EGARCH model with one or two volatility components. The numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered. The bootstrap-based test exhibited null rejection rates closest to the nominal values. The evaluation results of the two-component tests showed their practical usefulness. Finally, an application of the proposed methods to the log-returns of the German stock index was presented.
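The bootstrap bias correction used for the point estimators follows the usual recipe: re-estimate on bootstrap samples, take the average as an estimate of the estimator's expectation, and subtract the implied bias. The sketch below shows only that recipe on a deliberately simple i.i.d. toy estimator; for the Beta-Skew-t-EGARCH itself the resampling would have to respect the time-series structure (e.g. simulating from the fitted model), which is not shown here:

```python
import numpy as np

rng = np.random.default_rng(42)

def biased_estimator(sample):
    """Toy stand-in for an ML estimator: the ML variance estimate of a normal
    sample (divides by n), which is biased downward in small samples."""
    return float(np.var(sample))

def bootstrap_bias_corrected(sample, estimator, n_boot=1000):
    """Bootstrap bias correction: corrected = 2*theta_hat - mean(bootstrap replicates)."""
    theta_hat = estimator(sample)
    replicates = [estimator(rng.choice(sample, size=len(sample), replace=True))
                  for _ in range(n_boot)]
    return 2.0 * theta_hat - float(np.mean(replicates))

x = rng.normal(0.0, 2.0, size=40)          # true variance = 4
print(biased_estimator(x), bootstrap_bias_corrected(x, biased_estimator))
```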
120

Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour

Vaidhiyan, Nidhin Koshy January 2016 (has links) (PDF)
Our vision is as yet unsurpassed by machines because of the sophisticated representations of objects in our brains. This representation is vastly different from a pixel-based representation used in machine storage. It is this sophisticated representation that enables us to perceive two faces as very different, i.e., they are far apart in the “perceptual space”, even though they are close to each other in their pixel-based representations. Neuroscientists have proposed distances between responses of neurons to the images (as measured in macaque monkeys) as a quantification of the “perceptual distance” between the images. Let us call these the neuronal dissimilarity indices of perceptual distances. They have also proposed behavioural experiments to quantify these perceptual distances. Human subjects are asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images. The reciprocal of the search time for identifying the oddball is taken as a measure of perceptual distance between the oddball and the distractor. Let us call such estimates behavioural dissimilarity indices. In this thesis, we describe a decision-theoretic model for visual search that suggests a connection between these two notions of perceptual distances. In the first part of the thesis, we model visual search as an active sequential hypothesis testing problem. Our analysis suggests an appropriate neuronal dissimilarity index which correlates strongly with the reciprocal of search times. We also consider a number of alternative possibilities such as relative entropy (Kullback-Leibler divergence), the Chernoff entropy and the L1-distance associated with the neuronal firing rate profiles. We then come up with a means to rank the various neuronal dissimilarity indices based on how well they explain the behavioural observations. Our proposed dissimilarity index does better than the other three, followed by relative entropy, then Chernoff entropy and then L1 distance. In the second part of the thesis, we consider a scenario where the subject has to find an oddball image, but without any prior knowledge of the oddball and distractor images. Equivalently, in the neuronal space, the task for the decision maker is to find the image that elicits firing rates different from the others. Here, the decision maker has to “learn” the underlying statistics and then make a decision on the oddball. We model this scenario as one of detecting an odd Poisson point process having a rate different from the common rate of the others. The revised model suggests a new neuronal dissimilarity index. The new dissimilarity index is also strongly correlated with the behavioural data. However, the new dissimilarity index performs worse than the dissimilarity index proposed in the first part on existing behavioural data. The degradation in performance may be attributed to the experimental setup used for the current behavioural tasks, where search tasks associated with a given image pair were sequenced one after another, thereby possibly cueing the subject about the upcoming image pair, and thus violating this part's assumption that the decision maker lacks prior knowledge of the image pairs. In conclusion, the thesis provides a framework for connecting the perceptual distances in the neuronal and the behavioural spaces. Our framework can possibly be used to analyze the connection between the neuronal space and the behavioural space for various other behavioural tasks.
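The alternative indices mentioned (relative entropy, Chernoff entropy and L1 distance) have simple closed forms when neuronal responses are modelled as Poisson firing rates. The sketch below evaluates them for invented rates; it does not implement the thesis' proposed index, which is derived from the active sequential hypothesis testing model:

```python
import numpy as np

def kl_poisson(l1, l2):
    """Relative entropy (KL divergence) D(Poisson(l1) || Poisson(l2)) per unit time."""
    return l1 * np.log(l1 / l2) - l1 + l2

def chernoff_poisson(l1, l2, grid=np.linspace(0.01, 0.99, 99)):
    """Chernoff entropy between two Poisson rates: maximum over alpha of
    alpha*l1 + (1-alpha)*l2 - l1**alpha * l2**(1-alpha)."""
    return max(a * l1 + (1 - a) * l2 - l1 ** a * l2 ** (1 - a) for a in grid)

def l1_distance(rates1, rates2):
    """L1 distance between two firing-rate profiles (vectors of rates)."""
    return float(np.sum(np.abs(np.asarray(rates1) - np.asarray(rates2))))

# Hypothetical firing rates (spikes/s) of one neuron to an oddball vs a distractor.
print(kl_poisson(20.0, 12.0), chernoff_poisson(20.0, 12.0), l1_distance([20, 5], [12, 9]))
```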
