  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
691

Desenvolvimento de aplicativo para o método de discriminação de Fisher e seu uso na experimentação agronômica / Development of an application for Fisher's discrimination method and its use in agronomic experimentation

Padovani, Carlos Roberto Pereira, 1975- January 2004 (has links)
Orientador: Flávio Ferrari Aragon / Banca: Adalberto José Crocci / Banca: João Paulo Borim / Abstract: In the Agronomical Sciences, particularly in Energy in Agriculture, there are many situations in which several response variables are observed on the experimental plots or units. In these situations, a case of practical interest to agronomic experimentation is the study of the regions of similarity among plots, with special attention to the classification of new experimental units. A very robust procedure for studying similarity in multivariate settings is Fisher's discrimination method among several populations. Little is found in the agronomic literature about the use of this procedure, a fact probably related to the algebraic and matrix procedures required to construct the mathematical model that generates the regions and, above all, to the lack of easy-to-use software for researchers in applied areas. Accordingly, a computer program for Fisher's method was developed, accessible and easy to use for researchers in applied areas, complemented by a user's manual and by applications of the software to the rational use of energy. To illustrate the program, data from experiments carried out at EMBRAPA, in the Londrina region (PR, Brazil), were considered, involving six different varieties of sunflower (Helianthus annuus) and five quantitative plant characters. Fisher's discrimination enabled the graphical construction of the classification regions according to the genetic diversity of the sunflowers, retaining a high percentage of the variability information together with a low misclassification rate. / Mestre
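The thesis's software implements Fisher discrimination among several populations. As a rough illustration of the underlying idea only, here is a minimal two-class Fisher linear discriminant; the data points, class labels, and function names are invented for the example and are not the EMBRAPA sunflower data:

```python
# Minimal two-class Fisher linear discriminant (illustrative sketch).
# Projects 2-D observations onto w = Sw^{-1} (mean_A - mean_B) and
# classifies by which side of the projected midpoint a point falls.

def mean(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in (0, 1)]

def within_scatter(points, m):
    # 2x2 scatter matrix: sum of outer products of deviations from the mean
    s = [[0.0, 0.0], [0.0, 0.0]]
    for p in points:
        d = [p[0] - m[0], p[1] - m[1]]
        for i in (0, 1):
            for j in (0, 1):
                s[i][j] += d[i] * d[j]
    return s

def fisher_classifier(class_a, class_b):
    ma, mb = mean(class_a), mean(class_b)
    sa, sb = within_scatter(class_a, ma), within_scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in (0, 1)] for i in (0, 1)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    # midpoint of the projected class means; proj(mean_A) always exceeds it
    threshold = (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1])) / 2.0
    def classify(x):
        return "A" if w[0] * x[0] + w[1] * x[1] > threshold else "B"
    return classify

classify = fisher_classifier([(1, 2), (2, 1), (3, 3)], [(6, 7), (7, 6), (8, 8)])
label = classify((2, 3))  # point near class A
```

The full method generalizes this to more than two populations and draws the resulting classification regions, as the thesis's program does.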
692

Streamflow extremes and climate variability in Southeastern United States

Unknown Date (has links)
Trends in streamflow extremes at a regional scale, linked to the possible influences of four major oceanic-atmospheric oscillations, are analyzed in this study. The oscillations considered are the El Niño Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the Atlantic Multidecadal Oscillation (AMO), and the North Atlantic Oscillation (NAO). The main emphasis is on low flows in the South Atlantic-Gulf region of the United States. Several standard drought indices of low-flow extremes are evaluated during the two phases (warm/positive and cool/negative) of these oscillations. Long-term streamflow data from 43 USGS sites in the region, drawn from the Hydro-Climatic Data Network and least affected by anthropogenic influences, are used for the analysis. Results show that for ENSO, low-flow indices were more likely to occur during the La Niña phase; however, longer deficits were more likely during the El Niño phase. Results also show that for the PDO (AMO), all (most) low-flow indices occur during the cool (warm) phase. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
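One common building block of low-flow indices in studies like this is the minimum 7-day mean flow (the basis of statistics such as 7Q10). A minimal sketch of how it could be computed from a daily series; the flow values below are synthetic, not HCDN data:

```python
# Minimum 7-day mean flow from a daily streamflow series
# (synthetic data; a real study would use USGS daily records).

def min_seven_day_flow(daily_flows):
    window = 7
    # rolling 7-day means, then take the minimum
    means = [sum(daily_flows[i:i + window]) / window
             for i in range(len(daily_flows) - window + 1)]
    return min(means)

flows = [10.0] * 10 + [3.0] * 7 + [10.0] * 10  # a one-week dry spell
low = min_seven_day_flow(flows)
```

A phase-comparison analysis would then group such annual minima by oscillation phase (e.g., El Niño vs. La Niña years) before comparing their distributions.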
693

Power Laws na modelagem de caches de microprocessadores. / Power Laws on the modeling of caches of microprocessors.

Scoton, Filipe Montefusco 10 June 2011 (has links)
Power laws are statistical laws that permeate the most varied fields of human knowledge, such as Biology, Sociology, Geography, Linguistics, and Astronomy, and whose most important characteristic is the disparity among causes: a few elements are responsible for the great majority of the effects. Famous examples are the Pareto Principle, Zipf's Law, and the Forest Fire model. The Pareto Principle says that 80% of a nation's wealth is in the hands of just 20% of the population; in other words, a cause-and-effect relationship called 80-20. Zipf's Law states that frequency versus rank of occurrence follows a hyperbolic curve behaving like 1/x. The Forest Fire model represents the growth of trees in a forest between successive fires that destroy clusters of trees. Power laws show that a small percentage of a distribution has a high frequency of occurrence while the remaining cases have a low frequency, which leads to a decreasing straight line on a logarithmic scale. Based on simulations using the SPEC CPU2000 benchmark suite, this work investigates how these distributions can be used to understand and improve the performance of caches under different cache-line replacement policies. A possible new replacement policy built around a Pareto cache, and a new mechanism for switching the behavior of adaptive cache-line replacement algorithms, called the Forest Fire Switching Mechanism, both based on power laws, are proposed in order to obtain performance gains in application execution.
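The "decreasing straight line on a logarithmic scale" can be checked numerically: for an exact Zipf law f(r) = C/r, a least-squares fit of log f against log r has slope -1. A small sketch with synthetic frequencies (not SPEC CPU2000 measurements):

```python
import math

# Fit the slope of log(frequency) vs log(rank); an ideal Zipf law
# f(r) = C / r gives a straight line of slope exactly -1 in log-log space.
ranks = list(range(1, 51))
freqs = [1000.0 / r for r in ranks]   # synthetic Zipf frequencies, C = 1000

xs = [math.log(r) for r in ranks]
ys = [math.log(f) for f in freqs]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
# ordinary least squares slope
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```

Applied to cache-line reference counts from a benchmark trace, a slope near -1 would indicate Zipf-like reuse, the property the proposed Pareto cache tries to exploit.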
694

Bioinformatics-inspired binary image correlation: application to bio-/medical-images, microarrays, fingerprints and signature classifications

Unknown Date (has links)
The efforts addressed in this thesis concern assaying the extent of local features in 2D images for the purpose of recognition and classification, by comparing a test image against a template in binary format. It is a bioinformatics-inspired approach, pursued and presented as the deliverables of this thesis summarized below: 1. By applying the so-called 'Smith-Waterman (SW) local alignment' and 'Needleman-Wunsch (NW) global alignment' approaches of bioinformatics, a test 2D image in binary format is compared against a reference image so as to recognize the differential features that reside locally in the images being compared. 2. The SW- and NW-based binary comparison involves converting the one-dimensional sequence-alignment procedure (traditionally used for molecular sequence comparison in bioinformatics) to a 2D image matrix. 3. The relevant computational algorithms are implemented as MATLAB code. 4. The test images considered are real-world bio-/medical images, synthetic images, microarrays, biometric fingerprints (thumb impressions), and handwritten signatures. Based on the results, conclusions are enumerated and inferences are made, with directions for future studies. / by Deepti Pappusetty. / Thesis (M.S.C.S.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
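The Needleman-Wunsch global alignment named in item 1 can be sketched in a few lines. The version below scores two binary strings (such as flattened image rows); the scoring scheme of +1 match, -1 mismatch, -1 gap is an illustrative choice, not necessarily the thesis's parameters:

```python
# Needleman-Wunsch global alignment score via dynamic programming.
# Scoring (match=+1, mismatch=-1, gap=-1) is an assumed example choice.

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap              # align a's prefix against gaps
    for j in range(1, n + 1):
        dp[0][j] = j * gap              # align b's prefix against gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitute/match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[m][n]

score_same = nw_score("110100", "110100")   # identical rows: all matches
score_diff = nw_score("1111", "0000")       # all mismatches
```

Extending this to a 2D image means applying such row (or column) alignments across the image matrix and aggregating the scores, which is the conversion described in item 2.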
695

Análise da cadeia produtiva da soja no Estado de Goiás com ênfase nas operações logísticas / Analysis of the soybean production chain in the state of Goiás with emphasis on logistics operations

Silva, Adrielle Marques Mendes da 12 March 2015 (has links)
Made available in DSpace on 2016-08-10T10:40:25Z (GMT). No. of bitstreams: 1 ADRIELLE MARQUES MENDES DA SILVA.pdf: 2082243 bytes, checksum: 6f7d9688881a35a5d6fa46dcaefbdd58 (MD5) Previous issue date: 2015-03-12 / This work analyzes the soybean production chain in the state of Goiás, with emphasis on logistics operations. Descriptive and statistical methods were used for the analyses, together with the construction and inspection of maps of the state of Goiás highlighting soybean production and the available means of logistics operation. The data were collected from governmental and non-governmental agencies and then submitted to statistical methods, namely simple linear regression and multivariate statistics, to model the chain. Both the descriptive and the statistical analyses showed growth in production, in productivity, and in planted area over the years. The growth of the static storage capacity of the warehouses over the years was also analyzed. From these results, it was possible to forecast the behavior of the production chain and its relation to logistics operations in the years to come. The conclusion is that the growth rate of soybean production is higher than the growth of the static capacity of the warehouses. Soybeans are transported primarily by road; the rail mode is not used for soybean transport, which raises transportation costs. The work therefore serves as a suggestion for future investments in logistics infrastructure in the state of Goiás, since the forecasts produced by the analyses justify the need for investment to reduce the logistics costs of the soybean production chain.
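The simple-linear-regression forecasting step can be illustrated with a toy series: fit production against year by ordinary least squares and extrapolate. The numbers below are invented for the sketch and are not the Goiás data:

```python
# Ordinary least squares fit of production vs. year, then a forecast.
# Values are synthetic, chosen only to illustrate the method.

def ols_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

years = [2010, 2011, 2012, 2013, 2014]
production = [100.0, 105.0, 110.0, 115.0, 120.0]   # hypothetical units
slope, intercept = ols_line(years, production)
forecast_2016 = slope * 2016 + intercept
```

Fitting a second line to storage capacity and comparing the two slopes is the kind of comparison that supports the thesis's conclusion that production outgrows static warehouse capacity.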
697

Bootstrap distribution for testing a change in the Cox proportional hazards model.

January 2000 (has links)
Lam Yuk Fai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 41-43). / Abstracts in English and Chinese. / Contents:
Chapter 1 Basic Concepts (p.9): 1.1 Survival data; 1.1.1 An example; 1.2 Some important functions; 1.2.1 Survival function; 1.2.2 Hazard function; 1.3 Cox Proportional Hazards Model; 1.3.1 A special case; 1.3.2 An example (continued); 1.4 Extension of the Cox Proportional Hazards Model; 1.5 Bootstrap
Chapter 2 A New Method (p.19): 2.1 Introduction; 2.2 Definition of the test; 2.2.1 Our test statistic; 2.2.2 The alternative test statistic I; 2.2.3 The alternative test statistic II; 2.3 Variations of the test; 2.3.1 Restricted test; 2.3.2 Adjusting for other covariates; 2.4 Apply with bootstrap; 2.5 Examples; 2.5.1 Male mice data; 2.5.2 Stanford heart transplant data; 2.5.3 CGD data
Chapter 3 Large Sample Properties and Discussions (p.35): 3.1 Large sample properties and relationship to goodness of fit test; 3.1.1 Large sample properties of A and Ap; 3.1.2 Large sample properties of Ac and A; 3.2 Discussions
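The record's chapter list centers on combining a change test with the bootstrap. The generic nonparametric bootstrap skeleton looks like the sketch below; the sample mean here is only a stand-in statistic, not the thesis's change-test statistic for the Cox model:

```python
import random

# Generic nonparametric bootstrap: resample the data with replacement
# and recompute the statistic to approximate its sampling distribution.
# The mean is a placeholder; the thesis bootstraps a change-test
# statistic for the Cox proportional hazards model.

def bootstrap_distribution(data, statistic, n_boot=1000, seed=0):
    rng = random.Random(seed)
    return [statistic([rng.choice(data) for _ in data])
            for _ in range(n_boot)]

data = [1.2, 0.7, 2.4, 1.9, 0.3, 1.1, 2.0, 0.9]   # toy observations
boot = bootstrap_distribution(data, lambda xs: sum(xs) / len(xs))
lo_q, hi_q = sorted(boot)[25], sorted(boot)[975]  # rough 95% percentile interval
```

In a testing context, the bootstrap distribution of the statistic under resampling supplies the critical values against which the observed change statistic is compared.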
698

Bayesian analysis of multinomial regression with gamma utilities. / CUHK electronic theses & dissertations collection

January 2012 (has links)
In multinomial regression models of racetrack betting, different distributions of the utilities have been proposed: the exponential distribution, which is equivalent to Harville's model (Harville, 1973), the gamma distribution (Stern, 1990), and the normal distribution (Henery, 1981). Harville's model has the drawback that it ignores the increasing randomness of the competition for second and third place (Benter, 1994). Stern's model, using gamma utilities with shape parameter greater than 1, and Henery's model, using normal utilities, have been shown to produce a better fit (Bacon-Shone, Lo and Busche, 1992; Lo and Bacon-Shone, 1994; Lo, 1994). In this thesis, we use Bayesian methodology to predict the winning probabilities of horses from historical observed data; the gamma utility is adopted throughout. A convenient method of selecting Metropolis-Hastings proposal distributions for multinomial models is developed; a similar method was first exploited by Scott (2008). We augment the likelihood with the gamma-distributed utilities as latent variables. The gamma utility is transformed into a variable that follows the generalized extreme value distribution described by Mihram (1975), through which we obtain a linear regression model. The least squares estimate of the parameters is easily obtained from this linear model, and its asymptotic sampling distribution is discussed. The Metropolis-Hastings proposal distribution is generated conditional on the variance of this estimator. Finally, samples from the posterior distribution of the regression parameters are obtained. The proposed method is tested through betting simulations using data from the Hong Kong horse racing market. / Xu, Wenjun. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 46-48). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Contents:
Chapter 1 Introduction (p.1)
Chapter 2 Hong Kong Horse Racing Market and Models in Horse Racing (p.4): 2.1 Hong Kong Horse Racing Market; 2.2 Models in Horse Racing
Chapter 3 Metropolis-Hastings Algorithm in Multinomial Regression with Gamma Utilities (p.10): 3.1 Notations and Posterior Distribution; 3.2 Metropolis-Hastings Algorithm
Chapter 4 Application (p.15): 4.1 Variables; 4.2 Markov Chain Simulation; 4.3 Model Selection; 4.4 Estimation Result; 4.5 Betting Strategies and Comparisons
Chapter 5 Conclusion (p.41)
Appendix A (p.43); Appendix B (p.44); Bibliography (p.46)
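The Metropolis-Hastings step at the core of the method can be sketched with a one-dimensional random-walk sampler. The standard normal target below is only a stand-in for the thesis's posterior over regression parameters, and the proposal scale is an arbitrary choice:

```python
import math
import random

# Random-walk Metropolis-Hastings on a 1-D target density.
# log_target is the log of an unnormalized density; a standard normal
# stands in here for the posterior of the regression parameters.

def metropolis_hastings(log_target, x0=0.0, n_samples=5000, step=1.0, seed=0):
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)                          # keep current state either way
    return samples

samples = metropolis_hastings(lambda x: -0.5 * x * x)  # log N(0,1) up to a constant
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The thesis's contribution is precisely the choice of proposal: it is tuned using the variance of the least squares estimator from the transformed linear model, rather than a fixed step size as above.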
699

New perspectives on learning, inference, and control in brains and machines

Merel, Joshua Scott January 2016 (has links)
The work presented in this thesis provides new perspectives and approaches for problems that arise in the analysis of neural data. Particular emphasis is placed on parameter-fitting and automated-analysis problems that would arise naturally in closed-loop experiments. Part one focuses on two brain-computer interface problems. First, we provide a framework for understanding co-adaptation, the setting in which decoder updating and user learning occur simultaneously. We also provide a new perspective on intention-based parameter fitting, and tools to extend this approach to higher-dimensional decoders. Part two focuses on event inference, the decomposition of observed time-series data into interpretable events. We present applications of event-inference methods to voltage-clamp recordings as well as calcium imaging, and describe extensions that allow data to be combined across modalities or trials.
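Event inference in its simplest form reduces to finding onsets in a trace. The threshold-crossing sketch below is a deliberately crude stand-in for the model-based inference methods the thesis actually develops, and the trace values are synthetic:

```python
# Naive event inference: report indices where the trace first crosses a
# threshold upward. A crude stand-in for model-based event inference
# (e.g., deconvolving transients from a calcium imaging trace).

def event_onsets(trace, threshold):
    onsets = []
    for i in range(1, len(trace)):
        # an onset is an upward crossing of the threshold
        if trace[i] >= threshold and trace[i - 1] < threshold:
            onsets.append(i)
    return onsets

trace = [0.1, 0.0, 4.8, 5.1, 0.2, 0.1, 6.3, 0.0]   # synthetic trace, two transients
onsets = event_onsets(trace, threshold=2.0)
```

Model-based approaches improve on this by jointly inferring event times and amplitudes under a generative model of the recording noise, which is what makes them usable in closed-loop settings.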
700

Prognostic Modeling in the Presence of Competing Risks: an Application to Cardiovascular and Cancer Mortality in Breast Cancer Survivors

Leoce, Nicole Marie January 2016 (has links)
Currently, there are an estimated 2.8 million breast cancer survivors in the United States. Owing to modern screening practices and raised awareness, the majority of these cases will be diagnosed in the early stages of disease, where highly effective treatment options are available, leading a large proportion of these patients to die from causes other than breast cancer. The primary cause of death in the United States today is cardiovascular disease, which can be delayed or prevented with interventions such as lifestyle modifications or medications. To identify individuals who may be at high risk for a cardiovascular event or cardiovascular mortality, a number of prognostic models have been developed. The majority of these models were developed on populations free of comorbid conditions, using statistical methods that did not account for the competing risks of death from other causes; it is therefore unclear whether they will generalize to a cancer population that remains at increased risk of death from cancer and other causes. Consequently, this work has several aims. We will first summarize the major statistical methods available for analyzing competing-risks data, including a simulation study comparing them. This will inform the interpretation of the real-data analysis, conducted on a large, contemporary cohort of breast cancer survivors. For these women, we will categorize the major causes of death, hypothesizing that they will include cardiovascular failure. Next, we will evaluate the existing cardiovascular disease risk models in our population of cancer survivors, and then propose a new model to simultaneously predict a survivor's risk of death due to her breast cancer or due to cardiovascular disease, while accounting for additional competing causes of death. Lastly, model-predicted outcomes will be calculated for the cohort, and evaluation methods will be applied to determine the clinical utility of such a model.
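A competing-risks analysis of this kind replaces the complement of the Kaplan-Meier estimate with cause-specific cumulative incidence functions. A minimal Aalen-Johansen-style estimator is sketched below; the (time, cause) pairs are toy data, with cause 0 denoting censoring:

```python
from collections import Counter

# Cause-specific cumulative incidence (Aalen-Johansen style) for
# right-censored competing-risks data. Each subject contributes a
# (time, cause) pair; cause 0 means censored.

def cumulative_incidence(times, causes):
    events = sorted(zip(times, causes))
    n_at_risk = len(events)
    surv = 1.0                       # overall event-free survival just before t
    cif = Counter()                  # cause -> cumulative incidence
    i = 0
    while i < len(events):
        t = events[i][0]
        d = Counter()                # counts of each cause tied at time t
        while i < len(events) and events[i][0] == t:
            d[events[i][1]] += 1
            i += 1
        d_events = sum(v for k, v in d.items() if k != 0)
        for cause, count in d.items():
            if cause != 0:           # increment each cause's incidence
                cif[cause] += surv * count / n_at_risk
        surv *= 1.0 - d_events / n_at_risk
        n_at_risk -= sum(d.values())
    return dict(cif)

# four subjects, no censoring: three cause-1 deaths, one cause-2 death
cif = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 1])
```

Unlike one minus the cause-specific Kaplan-Meier estimate, these incidences sum to at most one across causes, which is the property that makes them suitable for simultaneously predicting breast cancer and cardiovascular mortality.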
