  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Modelo estocástico para estimação da produtividade de soja no Estado de São Paulo utilizando simulação normal bivariada / Stochastic model to estimate soybean productivity in the State of São Paulo through bivariate normal simulation

Martin, Thomas Newton 08 February 2007 (has links)
The availability of resources, both financial and labor, is scarce, so regional planning that minimizes the use of resources should be encouraged. Crop forecasting by means of modelling techniques should be carried out beforehand on the basis of regional characteristics, thus indicating the basic directions for research as well as for regional planning. The objectives of this work are therefore: (i) to characterize the climate variables through different probability distributions; (ii) to verify the spatial and temporal homogeneity of the climate variables; (iii) to use the bivariate normal distribution to simulate parameters employed in estimating soybean productivity; and (iv) to propose a model to estimate the order of magnitude of the potential productivity (dependent on the interaction of genotype, temperature, photosynthetically active radiation and photoperiod) and of the water-depleted productivity (dependent on the potential productivity, rainfall and soil water storage) of soybean, based on daily values of temperature, insolation and rainfall, for the State of São Paulo. The variables used in this study were mean air temperature, insolation, photosynthetically active solar radiation and rainfall, on a daily scale, obtained from 27 stations located in the State of São Paulo and six stations located in neighboring states. First, the fit of the variables to five probability distributions (normal, log-normal, exponential, gamma and Weibull) was verified by means of the Kolmogorov-Smirnov test. The spatial and temporal homogeneity of the data was assessed through cluster analysis using Ward's method, and the sample size (number of years) was estimated for each variable. Random numbers were generated by the Monte Carlo method. The simulation of photosynthetically active radiation and temperature was carried out under three cases: (i) an asymmetric triangular distribution, (ii) a normal distribution truncated at 1.96 standard deviations from the mean, and (iii) a bivariate normal distribution. The simulated data were evaluated by Bartlett's test of homogeneity of variance, the F test, the t test, Willmott's index of agreement, the slope of the regression line, Camargo's performance index (C), and goodness of fit to the (univariate) normal distribution. The model used to calculate the potential productivity of the soybean crop was developed from De Wit's model, including contributions from Van Heenst, Driessen, Konijn, de Vries, among others. The calculation of the depleted productivity depended on the potential, crop and actual evapotranspiration and on the coefficient of sensitivity to water deficit. Rainfall data were sampled from the normal distribution, and the daily carbohydrate production was depleted as a function of water stress and of the daily number of hours of sunshine. The interpolation of the data over the whole State of São Paulo was carried out by the Kriging method. Most of the variables were found to follow the normal probability distribution. Moreover, the variables show spatial and temporal variability, and the number of years required (sample size) differs considerably among them. Simulation using the bivariate normal distribution is the most appropriate because it best represents the climate variables, and the model for estimating the potential and depleted productivities of soybean produces results consistent with others reported in the literature.
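A minimal sketch of the kind of bivariate normal simulation the abstract describes, assuming illustrative (hypothetical) values for the means, standard deviations and correlation of daily mean temperature and photosynthetically active radiation; the thesis's own parameter estimates and crop model are not reproduced here.

```python
import numpy as np

# Hypothetical parameters for daily mean temperature (deg C) and
# photosynthetically active radiation (MJ m-2 day-1); not the thesis's estimates.
mean_temp, sd_temp = 24.0, 2.5
mean_par, sd_par = 8.5, 2.0
rho = 0.6  # assumed correlation between temperature and PAR

mean = np.array([mean_temp, mean_par])
cov = np.array([
    [sd_temp**2,             rho * sd_temp * sd_par],
    [rho * sd_temp * sd_par, sd_par**2],
])

rng = np.random.default_rng(seed=42)
# Monte Carlo simulation: one growing season of 120 correlated daily values.
sim = rng.multivariate_normal(mean, cov, size=120)
temp, par = sim[:, 0], sim[:, 1]

print(f"simulated correlation: {np.corrcoef(temp, par)[0, 1]:.2f}")
```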
112

Impact des critères de jugement sur l’optimisation de la détermination du nombre de sujet nécessaires pour qualifier un bénéfice clinique dans des essais cliniques en cancérologie / Impact of the endpoints on the optimization of the determination of the sample size in oncology clinical trials to qualify a clinical benefit

Pam, Alhousseiny 19 December 2017 (has links)
Overall survival (OS) is regarded as the reference and most relevant and objective primary endpoint in oncology clinical trials, and an OS benefit remains the gold standard for approval of new anticancer treatments by regulatory agencies such as the FDA. However, as the number of effective treatments available for most cancers increases, demonstrating an improvement in OS requires including more patients and following them longer in order to have sufficient statistical power. Composite time-to-event endpoints such as progression-free survival are therefore commonly used in phase III trials as surrogates for OS: they can be assessed earlier, and their use is strongly driven by the need to shorten trials and to reduce their cost and the number of subjects required. These endpoints are nonetheless often poorly defined, their definitions vary considerably from trial to trial, which makes comparisons between trials difficult, and their surrogacy for OS, that is, their ability to predict an OS benefit from the treatment effect observed on the surrogate, has not always been rigorously evaluated. The DATECAN-1 project addressed the definition problem by providing consensus recommendations that standardize these time-to-event endpoints across randomized clinical trials [1]; for surrogacy validation, the individual-patient-data meta-analytic method of Buyse and colleagues, which assesses both individual-level and trial-level surrogacy, is considered the reference approach. In addition, most phase III trials now include health-related quality of life (QoL) as an endpoint to investigate the clinical benefit of new therapeutic strategies for the patient. An alternative is to use co-primary endpoints, combining a tumor-based endpoint such as progression-free survival with QoL in order to ensure a clinical benefit for the patient [2]. Although QoL is recognized as a second primary endpoint by ASCO (American Society of Clinical Oncology) and the FDA (Food and Drug Administration) [3], it is still rarely used as a co-primary endpoint; the assessment, analysis and interpretation of QoL results remain complex, and clinicians still give them little weight because of their subjective and dynamic nature [4]. When designing a trial with multiple co-primary endpoints, the sample size must be determined so that statistical significance can be claimed for all co-primary endpoints while preserving the overall power, since the type I error rate increases with the number of co-primary endpoints. If the sample size is too small, important effects may go unnoticed; if it is too large, resources are wasted and more participants than necessary are put at risk; and for time-to-event endpoints the statistical power depends on the total number of events rather than on the total sample size. Several methods have been developed to adjust the type I error rate, generally by splitting it across the hypotheses tested [5]; all of these methods are investigated in this project. The objectives of this thesis project are: 1) to study the impact of the DATECAN-1 consensus definitions of time-to-event endpoints on the results and conclusions of trials published in pancreatic cancer; 2) to study the properties of candidate surrogate endpoints for OS; and 3) to propose a sample size design for phase III clinical trials with co-primary time-to-event endpoints such as progression-free survival and time to QoL deterioration. The final objective is to develop, on the basis of these publications, an R package for sample size calculation with co-primary time-to-event endpoints and to study surrogate endpoints for OS in pancreatic cancer.
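As a rough illustration of the type I error splitting mentioned above, the sketch below combines a Bonferroni split of the one-sided alpha across two co-primary time-to-event endpoints with Schoenfeld's approximation for the required number of events. The hazard ratios, power and allocation are hypothetical, and this is not the design algorithm developed in the thesis or its R package.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hr, alpha_one_sided, power, alloc=0.5):
    """Approximate number of events needed to detect hazard ratio `hr`
    with a one-sided log-rank test (Schoenfeld's approximation)."""
    z_alpha = norm.ppf(1 - alpha_one_sided)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)

# Two co-primary endpoints, e.g. progression-free survival and
# time to quality-of-life deterioration (hypothetical effect sizes).
alpha_total = 0.025           # overall one-sided type I error
alpha_each = alpha_total / 2  # Bonferroni split across the two endpoints
power_each = 0.90             # power targeted for EACH endpoint separately
# Note: the joint power to win on both endpoints is lower than power_each
# unless the endpoints are perfectly correlated.

events_pfs = schoenfeld_events(hr=0.70, alpha_one_sided=alpha_each, power=power_each)
events_qol = schoenfeld_events(hr=0.75, alpha_one_sided=alpha_each, power=power_each)

print(f"required events, PFS: {events_pfs:.0f}")
print(f"required events, QoL deterioration: {events_qol:.0f}")
```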
113

Aspectos estatísticos da amostragem de água de lastro / Statistical aspects of ballast water sampling

Costa, Eliardo Guimarães da 01 March 2013 (has links)
Ships' ballast water is one of the main vectors for the dispersal of organisms that are harmful to human health and to the environment, and international standards require the concentration of these organisms in the tank to be below a prespecified value. Because of time and cost limitations, this control relies on sampling. Under the assumption that the concentration of organisms in the tank is homogeneous, several authors have used the Poisson distribution for decision making based on a hypothesis test. Since this assumption is unrealistic, we extend the results to cases in which the organism concentration in the tank is heterogeneous, using stratification, nonhomogeneous Poisson processes, or by assuming that the concentration follows a Gamma distribution, which induces a Negative Binomial distribution for the number of sampled organisms. Furthermore, we propose a new approach to the problem through estimation techniques based on the Negative Binomial distribution. For practical application, we implemented computational routines in the R software.
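A hedged sketch of a count-based compliance check of the kind the abstract discusses: under a Poisson model for the number of organisms in a sampled volume, and under a Gamma-mixed (Negative Binomial) model that allows heterogeneity, it computes the probability of observing at most k organisms when the true concentration sits exactly at a regulatory limit. The limit of 10 organisms per cubic metre, the sampled volume and the dispersion value are assumptions for illustration, not values taken from the thesis.

```python
from scipy.stats import poisson, nbinom

limit = 10.0        # regulatory limit: organisms per m^3 (assumed for illustration)
volume = 3.0        # sampled volume in m^3 (assumed)
mu = limit * volume # expected count in the sample when exactly at the limit

k_obs = 20  # organisms actually counted in the sample

# Homogeneous tank: Poisson model.
p_pois = poisson.cdf(k_obs, mu)

# Heterogeneous tank: Gamma-mixed Poisson = Negative Binomial.
# scipy parameterizes NB by (n, p) with mean n*(1-p)/p; choosing
# p = n_disp/(n_disp + mu) gives mean mu and variance mu + mu^2/n_disp.
n_disp = 5.0  # assumed dispersion parameter
p_nb = n_disp / (n_disp + mu)
p_negbin = nbinom.cdf(k_obs, n_disp, p_nb)

# Small probabilities are evidence that the true concentration is below the limit.
print(f"P(X <= {k_obs}) at the limit: Poisson {p_pois:.3f}, NegBin {p_negbin:.3f}")
```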
114

Gráficos de controle com tamanho de amostra variável: classificando sua estratégia conforme sua destinação por intermédio de um estudo bibliométrico / Control charts with variable sample size: classifying the strategy according to its purpose through a bibliometric study

Caltabiano, Ana Maria de Paula January 2018 (has links)
Advisor: Antonio Fernando Branco Costa / Abstract: Control charts were created by Shewhart around 1924. Since then, many strategies have been proposed to improve the performance of these statistical tools. Among them, the adaptive-parameter strategy stands out and has given rise to a very productive line of research. One of its branches is the variable sample size chart, in which the size of the next sample depends on the position of the current sample point: if the point is close to the center line, the next sample is small; if it is far from the center line but not yet in the action region, the next sample is large. This sampling scheme became known as the VSS (variable sample size) scheme. This dissertation reviews the process-monitoring literature whose main focus is VSS sampling schemes. A systematic literature review was carried out through a bibliometric analysis covering the period from 1980 to 2018, with the aim of classifying the VSS strategy according to its purpose, for example, charts with known parameters and independent observations. The purposes were divided into ten classes: I – type of VSS; II – type of monitoring; III – number of variables under monitoring; IV – type of chart; V – process parameters; VI – signaling rules; VII – nature of the process; VIII – type of optimization; IX – mathematical model of the chart properties; X – type of production. The main conclusion of this study was that in the class... (Full abstract: follow the electronic access link below) / Master's
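To make the VSS idea concrete, the sketch below simulates an x-bar chart in which the size of the next sample is chosen from the position of the current sample mean: a small sample after a point in the central region, a large sample after a point in the warning region. The warning and action limits and the two sample sizes are generic textbook-style choices (assumptions), not values from any particular paper surveyed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(7)

mu0, sigma = 0.0, 1.0      # in-control mean and standard deviation (assumed)
n_small, n_large = 3, 8    # the two sample sizes of the VSS scheme (assumed)
w, k = 1.0, 3.0            # warning and action limits on the standardized mean

shift = 0.8                # sustained mean shift we want to detect
n_next = n_small           # start with the small sample
samples_to_signal = 0

while True:
    samples_to_signal += 1
    x = rng.normal(mu0 + shift, sigma, size=n_next)
    z = (x.mean() - mu0) / (sigma / np.sqrt(n_next))  # standardized sample mean
    if abs(z) > k:          # action region: signal an out-of-control condition
        break
    # VSS rule: central region -> small next sample, warning region -> large.
    n_next = n_small if abs(z) <= w else n_large

print(f"signal after {samples_to_signal} samples")
```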
115

Abordagem não-paramétrica para cálculo do tamanho da amostra com base em questionários ou escalas de avaliação na área de saúde / Non-parametric approach for calculating sample size based on questionnaires or assessment scales in health care

Euro de Barros Couto Junior 01 October 2009 (has links)
This text suggests how to calculate a sample size based on a data collection instrument composed of categorical items. The arguments for this suggestion are grounded in the theories of Combinatorics and Paraconsistency. The purpose is to propose a simple and practical calculation procedure for obtaining an acceptable sample size for collecting, organizing and analyzing data from the application of a medical data collection instrument based exclusively on discrete (categorical) items, that is, each item of the instrument is treated as a non-parametric variable with a finite number of categories. In health care it is very common to use survey instruments based on items of this kind: clinical protocols, hospital records, questionnaires, scales and other inquiry tools consist of an organized sequence of categorical items. A formula for calculating the sample size was proposed for populations of unknown size, and an adjustment of this formula was proposed for populations of known size. Practical examples showed that both formulas can be used, which makes them convenient in situations where little or no information is available about the population from which the sample will be drawn.
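The abstract does not reproduce the combinatorial formula itself, so the sketch below only illustrates the general pattern it describes — a base sample size for an unknown (effectively infinite) population followed by an adjustment when the population size is known — using the standard finite-population correction as a stand-in. This is explicitly not the thesis's Combinatorics/Paraconsistency-based formula.

```python
import math

def base_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    """Classical sample size for a proportion when the population size is unknown."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

def adjust_for_known_population(n0: float, N: int) -> float:
    """Standard finite-population correction for a known population of size N."""
    return n0 / (1 + (n0 - 1) / N)

n0 = base_sample_size()                        # about 384 for 95% confidence, 5% margin
n_small_pop = adjust_for_known_population(n0, N=600)

print(f"unknown population size: {math.ceil(n0)} subjects")
print(f"known population of 600: {math.ceil(n_small_pop)} subjects")
```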
116

藥劑生體可用及相等性在兩個單尾檢定下樣本數之研究 / Sample Size Determination for the Two One-Sided Tests Procedure in Bioavailability/Bioequivalence

吳嘉翰, Wu, Chia-Han Unknown Date (has links)
Bioavailability and bioequivalence studies play a very important role in drug development. The main purpose of this thesis is how to choose an appropriate sample size in various crossover designs so as to achieve the required power. Based on Schuirmann's (1987) interval hypothesis test with the ±20 decision rule, Liu and Chow (1992a) proposed a simple sample size calculation method for the two-by-two crossover design. This thesis further investigates sample size calculation methods for higher-order crossover designs.
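A rough sketch, using a normal approximation, of a Liu & Chow-style sample size calculation for Schuirmann's two one-sided tests in a standard two-by-two crossover. The coefficient of variation, true difference and ±20% equivalence limits are illustrative assumptions, and the thesis's extensions to higher-order crossover designs are not reproduced.

```python
import math
from scipy.stats import norm

def n_per_sequence_tost(cv, theta=0.0, limit=0.20, alpha=0.05, power=0.80):
    """Normal-approximation sample size per sequence for the two one-sided
    tests procedure in a 2x2 crossover with additive +/-20% equivalence limits.
    cv: intra-subject coefficient of variation; theta: true (mu_T - mu_R)/mu_R."""
    z_alpha = norm.ppf(1 - alpha)
    beta = 1 - power
    if theta == 0.0:
        # At zero true difference the two one-sided tests share the type II error.
        z_beta = norm.ppf(1 - beta / 2)
        n = (z_alpha + z_beta) ** 2 * (cv / limit) ** 2
    else:
        z_beta = norm.ppf(1 - beta)
        n = (z_alpha + z_beta) ** 2 * (cv / (limit - abs(theta))) ** 2
    return math.ceil(n)

# Illustrative values, not taken from the thesis.
for theta in (0.0, 0.05, 0.10):
    n = n_per_sequence_tost(cv=0.20, theta=theta)
    print(f"true difference {theta:.2f}: about {n} subjects per sequence ({2 * n} total)")
```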
117

Effective GPS-based panel survey sample size for urban travel behavior studies

Xu, Yanzhi 05 April 2010 (has links)
This research develops a framework to estimate the effective sample size of Global Positioning System (GPS) based panel surveys in urban travel behavior studies for a variety of planning purposes. Recent advances in GPS monitoring technologies have made it possible to implement panel surveys lasting weeks, months or even years. The many advantageous features of GPS-based panel surveys make them attractive for travel behavior studies, but their higher cost compared to conventional one-day or two-day paper diary surveys requires scrutiny at the sample size planning stage to ensure cost-effectiveness. The sample size analysis in this dissertation focuses on three major aspects of travel behavior studies: 1) obtaining reliable means for key travel behavior variables, 2) conducting regression analysis of key travel behavior variables against explanatory variables such as demographic characteristics and seasonal factors, and 3) examining the impacts of a policy measure on travel behavior through before-and-after studies. The sample size analyses are based on the GPS data collected in the multi-year Commute Atlanta study. The analysis concerned with obtaining reliable means for key travel behavior variables uses Monte Carlo re-sampling techniques to assess how the means behave across various combinations of sample size and survey length. The framework and methods for sample size estimation related to regression analysis and before-and-after studies are derived from sample size procedures based on the generalized estimating equation (GEE) method, which were originally proposed for longitudinal studies in biomedical research; this dissertation adapts these procedures to the design of panel surveys for urban travel behavior studies using the information made available by the Commute Atlanta study. The findings indicate that the required sample sizes should be much larger than the sample sizes in existing GPS-based panel surveys. This research recommends a desired range of sample sizes based on the objectives and survey lengths of urban travel behavior studies.
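A small sketch of the Monte Carlo re-sampling idea described above, applied to synthetic per-household daily travel values rather than the Commute Atlanta data: for each candidate sample size it repeatedly draws households and records the spread of the resulting mean estimates. The data-generating values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(123)

# Synthetic "population" of household mean daily vehicle-miles traveled
# (log-normal, right-skewed); purely illustrative, not Commute Atlanta data.
population = rng.lognormal(mean=3.2, sigma=0.6, size=5000)

candidate_sizes = [50, 100, 200, 400]
n_replicates = 2000

for n in candidate_sizes:
    # Re-sample households many times and look at the spread of the mean.
    means = np.array([
        rng.choice(population, size=n, replace=False).mean()
        for _ in range(n_replicates)
    ])
    half_width = 1.96 * means.std(ddof=1)  # approximate 95% margin of error
    print(f"n={n:4d}: mean {means.mean():6.1f}, +/- {half_width:4.1f} miles")
```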
118

Some properties of measures of disagreement and disorder in paired ordinal data

Högberg, Hans January 2010 (has links)
The measures studied in this thesis were a measure of disorder, D, and a measure of the individual part of the disagreement, the measure of relative rank variance, RV, proposed by Svensson in 1993. The measure of disorder is a useful measure of order consistency in paired assessments on scales with different numbers of possible values. The measure of relative rank variance is useful for evaluating reliability and for evaluating change in qualitative outcome variables. Paper I gives an overview of methods used in the analysis of dependent ordinal data and compares the methods with respect to their assumptions, specifications, applicability, and implications for use. Paper II applies and compares the results of some standard models, tests, and measures on two different research problems. In Paper III the sampling distribution of the measure of disorder was studied both analytically and by a simulation experiment: asymptotic normality was shown using the theory of U-statistics, and the simulation experiments for finite sample sizes and various amounts of disorder showed that the sampling distribution is approximately normal for sample sizes of about 40 to 60 for moderate values of D, and for smaller sample sizes when D is substantial. In Paper IV the sampling distribution of the relative rank variance was studied in a simulation experiment, which showed that the distribution is approximately normal for sample sizes of 60-100 for moderate values of RV, and for smaller sample sizes when RV is substantial. Paper V proposes a procedure for inference on relative rank variances from two or more samples: pairwise comparisons using the jackknife technique for variance estimation, with the normal distribution used as an approximation for inference on parameters in independent samples, based on the results of Paper IV. In addition, the Kruskal-Wallis test for independent samples and Friedman's test for dependent samples were applied. / Statistical methods for ordinal data
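A generic sketch of the jackknife variance estimation used for the pairwise comparisons in Paper V, with a placeholder statistic standing in for the relative rank variance RV (the exact definition of RV is not reproduced here); the paired ordinal data are simulated for illustration.

```python
import numpy as np

def rank_statistic(x, y):
    """Placeholder paired-ordinal statistic; substitute the relative rank
    variance (RV) or any other measure of interest here."""
    return np.mean(np.abs(np.argsort(np.argsort(x)) - np.argsort(np.argsort(y))))

def jackknife_se(x, y, stat):
    """Leave-one-out jackknife standard error of a paired statistic."""
    n = len(x)
    loo = np.array([stat(np.delete(x, i), np.delete(y, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(0)
# Simulated paired assessments on a 5-category ordinal scale.
x = rng.integers(1, 6, size=80)
y = np.clip(x + rng.integers(-1, 2, size=80), 1, 5)

theta = rank_statistic(x, y)
se = jackknife_se(x, y, rank_statistic)
# Normal-approximation 95% confidence interval, as suggested by Papers IV-V.
print(f"estimate {theta:.2f}, 95% CI ({theta - 1.96*se:.2f}, {theta + 1.96*se:.2f})")
```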
119

Design, maintenance and methodology for analysing longitudinal social surveys, including applications

Domrow, Nathan Craig January 2007 (has links)
This thesis describes the design, maintenance and statistical analysis involved in undertaking a longitudinal survey. A longitudinal survey (or study) obtains observations or responses from individuals at several times over a defined period, which enables the direct study of changes in an individual's response over time. In particular, it distinguishes an individual's change over time from the baseline differences among individuals within the initial panel (or cohort), which is not possible in a cross-sectional study. Longitudinal surveys therefore give correlated responses within individuals and require different considerations for sample design, selection and analysis than standard cross-sectional studies. This thesis examines the methodology for analysing social surveys, most of which consist of variables described as categorical variables. It outlines the process of sample design and selection, interviewing and analysis for a longitudinal study, with emphasis on the categorical response data typical of a survey. Examples relate to the Goodna Longitudinal Survey and the Longitudinal Survey of Immigrants to Australia (LSIA), and the analysis in this thesis also uses data collected from these surveys. The Goodna Longitudinal Survey was conducted by the Queensland Office of Economic and Statistical Research (a portfolio office within Queensland Treasury) and began in 2002; it ran for two years, during which two waves of responses were collected.
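Because longitudinal surveys yield correlated categorical responses within individuals, one common analysis route (shown here as an illustration under stated assumptions, not necessarily the approach taken in the thesis) is a generalized estimating equation model with an exchangeable working correlation. The sketch below uses simulated data and hypothetical column names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated long-format panel: one row per respondent per wave (hypothetical names).
rng = np.random.default_rng(1)
n_people, n_waves = 300, 2
df = pd.DataFrame({
    "person_id": np.repeat(np.arange(n_people), n_waves),
    "wave": np.tile(np.arange(1, n_waves + 1), n_people),
    "age": np.repeat(rng.integers(18, 80, n_people), n_waves),
})
# Binary outcome (e.g. employed / not employed) with a within-person effect,
# so responses are correlated within individuals.
person_effect = np.repeat(rng.normal(0, 1, n_people), n_waves)
logit = -1.0 + 0.02 * (df["age"] - 45) + 0.3 * (df["wave"] - 1) + person_effect
df["employed"] = rng.binomial(1, np.asarray(1 / (1 + np.exp(-logit))))

# GEE with a logit link and exchangeable working correlation within person.
model = smf.gee(
    "employed ~ age + wave",
    groups="person_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```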
120

The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges

Schintler, Laurie A., Fischer, Manfred M. 28 October 2018 (has links) (PDF)
Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one hand, they hold great promise for combining increasingly detailed data on each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other hand, the massive sample size and high dimensionality of Big Data and their geo-temporal character introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and of how these features drive a paradigm change in data management and analysis, as well as in the computing environment. / Series: Working Papers in Regional Science
