  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Small holder farmers' perceptions, host plant suitability and natural enemies of the groundnut leafminer, Aproaerema modicella (Lepidoptera: Gelechiidae) in South Africa / Anchen van der Walt

Van der Walt, Anchen January 2007 (has links)
Thesis (M. Environmental Science)--North-West University, Potchefstroom Campus, 2008.
32

Sequential optimal design of neurophysiology experiments

Lewi, Jeremy 31 March 2009 (has links)
For well over 200 years, scientists and doctors have been poking and prodding brains in every which way in an effort to understand how they work. The earliest pokes were quite crude, often involving permanent forms of brain damage. Though neural injury continues to be an active area of research within neuroscience, technology has given neuroscientists a number of tools for stimulating and observing the brain in very subtle ways. Nonetheless, the basic experimental paradigm remains the same: poke the brain and see what happens. For example, neuroscientists studying the visual or auditory system can easily generate any image or sound they can imagine to see how an organism or neuron will respond. Since neuroscientists can now easily design more pokes than they could ever deliver, a fundamental question is "What pokes should they actually use?" The complexity of the brain means that only a small number of the pokes scientists can deliver will produce any information about the brain. One of the fundamental challenges of experimental neuroscience is finding the right stimulus parameters to produce an informative response in the system being studied. This thesis addresses this problem by developing algorithms to sequentially optimize neurophysiology experiments. Every experiment we conduct contains information about how the brain works, so before conducting the next experiment we should use what we have already learned to decide which experiment to perform next. In particular, we should design the experiment that will reveal the most information about the brain. At a high level, neuroscientists already perform this type of sequential, optimal experimental design; for example, crude experiments which knock out entire regions of the brain have given rise to modern experimental techniques which probe the responses of individual neurons using finely tuned stimuli.
The goal of this thesis is to develop automated and rigorous methods for optimizing neurophysiology experiments efficiently and at a much finer time scale. In particular, we present methods for near-instantaneous optimization of the stimulus being used to drive a neuron.
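The "design the experiment that will reveal the most information" idea in the abstract above can be sketched for the simplest tractable case: a linear-Gaussian tuning model, where the informativeness of a candidate stimulus has a closed form. This is a minimal illustration of sequential infomax design, not the thesis's actual algorithm; the model, candidate grid, and all parameter values are assumptions for the sketch.

```python
import numpy as np

# Sketch: sequentially pick the stimulus that maximizes expected information
# gain about the tuning weights w of a linear-Gaussian neuron, r = w.x + noise.
rng = np.random.default_rng(0)
d = 2
w_true = np.array([1.0, -0.5])           # hidden tuning weights (simulated neuron)
sigma2 = 0.25                            # response noise variance

# Gaussian posterior over w, starting from the prior N(0, I).
mu = np.zeros(d)
P = np.eye(d)                            # posterior covariance

candidates = rng.standard_normal((200, d))  # stimuli we could deliver

def info_gain(x, P, sigma2):
    # Posterior entropy reduction if stimulus x is shown:
    # 0.5 * log(1 + x' P x / sigma2) for the linear-Gaussian model.
    return 0.5 * np.log1p(x @ P @ x / sigma2)

for _ in range(20):
    gains = np.array([info_gain(x, P, sigma2) for x in candidates])
    x = candidates[np.argmax(gains)]     # most informative stimulus right now
    r = x @ w_true + rng.normal(scale=np.sqrt(sigma2))  # simulated response
    # Conjugate rank-one Bayesian update of the posterior.
    Px = P @ x
    denom = sigma2 + x @ Px
    mu = mu + Px * (r - x @ mu) / denom
    P = P - np.outer(Px, Px) / denom

print(np.round(mu, 2))   # posterior mean should approach w_true
```

Each loop iteration is one "poke": score every candidate by how much it would shrink the posterior, deliver the best one, and update. Real neural response models (e.g. the GLMs used in this line of work) make the information gain non-conjugate, which is where the thesis's fast approximate methods come in.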
33

Modelling of spruce forest decay caused by the European spruce bark beetle in the area of Bohemian Forest using GIS

BROŽ, Zdeněk January 2016 (has links)
This thesis deals with the bark beetle population gradation that resulted in dieback of the montane spruce forest in the central part of the Bohemian Forest, Czech Republic, during 1991-2000. A spatio-temporal model of changing land cover was built using remote sensing and GIS methods, and the statistical analyses were performed using generalized linear models (GLM). The possible effects of various conditions and environmental factors at the landscape as well as the stand level are discussed.
34

Bayesian D-Optimal Design Issues and Optimal Design Construction Methods for Generalized Linear Models with Random Blocks

January 2015 (has links)
abstract: Optimal experimental design for generalized linear models is often done using a pseudo-Bayesian approach that integrates the design criterion across a prior distribution on the parameter values. This approach ignores the lack of utility of certain models contained in the prior, and a case is demonstrated where the heavy focus on such hopeless models results in a design with poor performance and with wild swings in coverage probabilities for Wald-type confidence intervals. Design construction using a utility-based approach is shown to result in much more stable coverage probabilities in the area of greatest concern. The pseudo-Bayesian approach can be applied to the problem of optimal design construction under dependent observations. Often, correlation between observations exists due to restrictions on randomization. Several techniques for optimal design construction are proposed in the case of the conditional response distribution being a natural exponential family member but with a normally distributed block effect. The reviewed pseudo-Bayesian approach is compared to an approach based on substituting the marginal likelihood with the joint likelihood and an approach based on projections of the score function (often called quasi-likelihood). These approaches are compared for several models with normal, Poisson, and binomial conditional response distributions via the true determinant of the expected Fisher information matrix where the dispersion of the random blocks is considered a nuisance parameter. A case study using the developed methods is performed. The joint and quasi-likelihood methods are then extended to address the case when the magnitude of random block dispersion is of concern. Again, a simulation study over several models is performed, followed by a case study when the conditional response distribution is a Poisson distribution. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2015
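The pseudo-Bayesian criterion described above (integrating a design criterion over a prior on the parameters) can be sketched for the simplest case: D-optimality for a one-factor logistic model, averaging log|Fisher information| over prior draws. The prior, the two candidate designs, and their run sizes are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Sketch: pseudo-Bayesian D-criterion for logit(p) = b0 + b1*x.
# Score each candidate design by E_prior[ log det M(beta, design) ],
# where M is the logistic Fisher information matrix.
rng = np.random.default_rng(1)
prior = rng.normal(loc=[0.0, 1.0], scale=[0.5, 0.3], size=(500, 2))  # draws of (b0, b1)

def log_det_info(design, beta):
    X = np.column_stack([np.ones_like(design), design])   # rows [1, x]
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)                                     # logistic GLM weights
    M = (X * W[:, None]).T @ X                            # Fisher information
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

def pseudo_bayes_D(design, prior):
    return np.mean([log_det_info(design, b) for b in prior])

designs = {
    "narrow": np.array([-0.5, 0.0, 0.5] * 4),    # 12 runs clustered near x = 0
    "spread": np.array([-3.0, -1.0, 1.0, 3.0] * 3),  # 12 runs spread out
}
scores = {name: pseudo_bayes_D(d, prior) for name, d in designs.items()}
print(max(scores, key=scores.get), {k: round(v, 2) for k, v in scores.items()})
```

Averaging over the prior is exactly the step the abstract criticizes when the prior contains "hopeless" parameter values: a single draw with a near-degenerate information matrix can dominate the mean of the log-determinants.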
35

Modelos para dados de contagem com superdispersão: uma aplicação em um experimento agronômico / Models for count data with overdispersion: application in an agronomic experiment

Douglas Toledo Batista 26 June 2015 (has links)
The reference model for count data is the Poisson model, whose defining assumption is that the mean and variance are equal. This mean-variance relationship, however, does not always hold in observational data: the observed variance is often greater than the expected variance, a phenomenon known as overdispersion. The aim of this work is to apply generalized linear models in order to select a model that satisfactorily accommodates the overdispersion present in count data. The data come from an experiment that aimed to evaluate and characterize the parameters involved in the flowering of adult orange trees of the variety "x11" grafted on "Cravo" and "Swingle" rootstocks. First, a Poisson model with canonical link function was fitted. The deviance, the generalized Pearson chi-squared statistic, and half-normal plots gave strong evidence of overdispersion. The negative binomial and quasi-Poisson models were then used as alternatives to the Poisson model. The quasi-Poisson model provided the best fit to the data, allowing more accurate inferences and practical interpretations of the model parameters.
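The diagnostic-and-remedy workflow in the abstract above (fit Poisson, detect overdispersion via the Pearson statistic, rescale with quasi-Poisson) can be sketched on simulated data. The data here are simulated negative binomial counts, so the Poisson variance assumption is deliberately violated; variable names and parameter values are illustrative, not from the agronomic experiment.

```python
import numpy as np

# Sketch: Poisson GLM via IRLS, Pearson dispersion estimate, quasi-Poisson SEs.
rng = np.random.default_rng(2)
n = 300
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])
mu_true = np.exp(1.0 + 0.8 * x)
# Negative binomial counts: mean mu, variance mu + mu^2/size -> overdispersed.
size = 2.0
y = rng.negative_binomial(size, size / (size + mu_true))

# Fit the Poisson model (log link) by iteratively reweighted least squares.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                                   # Poisson working weights
    z = X @ beta + (y - mu) / mu             # working response
    beta = np.linalg.solve((X * W[:, None]).T @ X, (X * W[:, None]).T @ z)

mu = np.exp(X @ beta)
pearson_X2 = np.sum((y - mu) ** 2 / mu)
phi = pearson_X2 / (n - 2)                   # dispersion: ~1 if Poisson holds
cov = np.linalg.inv((X * mu[:, None]).T @ X)
se_poisson = np.sqrt(np.diag(cov))
se_quasi = se_poisson * np.sqrt(phi)         # quasi-Poisson rescaled SEs
print(round(phi, 2), np.round(beta, 2))
```

A dispersion estimate well above 1 is the numerical counterpart of the deviance and half-normal-plot evidence mentioned in the abstract; quasi-Poisson keeps the Poisson point estimates but inflates the standard errors by the square root of the dispersion.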
36

A Statistical Analysis of the Lake Levels at Lake Neusiedl

Leodolter, Johannes January 2008 (has links) (PDF)
A long record of daily data is used to study the lake levels of Lake Neusiedl, a large steppe lake at the eastern border of Austria. Daily lake level changes are modeled as functions of precipitation, temperature, and wind conditions. The occurrence and the amount of daily precipitation are modeled with logistic regressions and generalized linear models.
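The two-part structure described above, one model for whether precipitation occurs and another for the amount, can be sketched as follows. Everything here is simulated and the covariates (temperature, wind) are stand-ins; the abstract does not specify the actual regressors, and the amount model below uses a simple log-linear fit rather than whatever GLM family the paper chose.

```python
import numpy as np

# Sketch: two-part daily precipitation model.
# Part 1: logistic regression for occurrence. Part 2: log-linear amount model.
rng = np.random.default_rng(3)
n = 500
temp = rng.normal(15, 8, n)
wind = rng.gamma(2.0, 2.0, n)
X = np.column_stack([np.ones(n), (temp - 15) / 8, (wind - 4) / 3])

p_true = 1 / (1 + np.exp(-(-0.5 + 0.8 * X[:, 1] + 0.5 * X[:, 2])))
wet = rng.random(n) < p_true                          # did it rain?
amount = np.where(wet, rng.lognormal(1.0 + 0.4 * X[:, 1], 0.6), 0.0)

# Part 1: occurrence, logistic regression fitted by IRLS.
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-(X @ beta)))
    W = p * (1 - p)
    z = X @ beta + (wet - p) / W
    beta = np.linalg.solve((X * W[:, None]).T @ X, (X * W[:, None]).T @ z)

# Part 2: amount on wet days only, least squares on log(amount).
Xw, yw = X[wet], np.log(amount[wet])
gamma = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print(np.round(beta, 2), np.round(gamma, 2))
```

Fitting the amount model only on wet days is what makes this a genuine two-part (hurdle-style) specification: zero days inform the occurrence model, positive days inform the amount model.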
37

Analýza závislosti sociální situace na úrovni transferů a dalších faktorech / Analysis of social situations depending on the level of transfers and other factors

Harudová, Jana January 2017 (has links)
This diploma thesis deals with the social policies of the European Union and with poverty. Social policies are divided into five social models based on standard typologies; the individual models are characterized separately, and the claims are supported by appropriate economic indicators. The practical part builds on these theoretical foundations and examines the dependence between the studied variables and the social models. Based on poverty indicators, the social situation of the individual states of the European Union is described, and relationships between poverty indicators and economic variables are examined. A generalized linear model was designed to determine the dependence of the social situation in the EU on the selected factors.
38

Subsampling Strategies for Bayesian Variable Selection and Model Averaging in GLM and BGNLM

Lachmann, Jon January 2021 (has links)
Bayesian Generalized Nonlinear Models (BGNLM) offer a flexible alternative to GLM while still providing better interpretability than machine learning techniques such as neural networks. In BGNLM, the methods of Bayesian Variable Selection and Model Averaging are applied in an extended GLM setting. Models are fitted to data using MCMC within a genetic framework in an algorithm called GMJMCMC. In this thesis, we present a new implementation of the algorithm as a package in the programming language R. We also present a novel algorithm called S-IRLS-SGD for estimating the MLE of a GLM by subsampling the data. Finally, we present some theory combining the novel algorithm with GMJMCMC/MJMCMC/MCMC and a number of experiments demonstrating the performance of the contributed algorithm.
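The subsampling idea behind the abstract above, estimating a GLM's MLE from minibatches rather than full-data passes, can be sketched with plain stochastic gradient ascent on a logistic log-likelihood. This is ordinary minibatch SGD, not the S-IRLS-SGD algorithm itself; the data, batch size, and learning-rate schedule are illustrative assumptions.

```python
import numpy as np

# Sketch: approximate the logistic-regression MLE by stochastic gradient
# ascent, touching only a small random subsample of the data at each step.
rng = np.random.default_rng(4)
n, d = 20000, 5
X = rng.standard_normal((n, d))
beta_true = np.array([1.0, -0.5, 0.25, 0.0, 0.75])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ beta_true)))).astype(float)

beta = np.zeros(d)
batch, lr = 256, 0.5
for step in range(2000):
    idx = rng.integers(0, n, batch)              # subsample the data
    Xb, yb = X[idx], y[idx]
    p = 1 / (1 + np.exp(-(Xb @ beta)))
    grad = Xb.T @ (yb - p) / batch               # average score on the minibatch
    beta += lr * grad / np.sqrt(1 + step / 100)  # decaying step size
print(np.round(beta, 2))
```

Each step costs O(batch * d) instead of O(n * d), which is the point when the GLM fit sits inside an outer MCMC/model-search loop such as GMJMCMC and must be repeated many thousands of times.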
39

Vizualizace a export výstupů funkční magnetické rezonance / Visualization and export outputs from functional magnetic resonance imaging

Přibyl, Jakub January 2015 (has links)
This thesis discusses the principles and methodology of functional magnetic resonance imaging (fMRI): the origin of the BOLD signal and the types of experiments in which it is used. Attention is then paid to fMRI data processing and statistical analysis. Subsequent chapters give a brief description of the most common software tools used to analyze fMRI data. The main goal was to create a MATLAB program with a detailed graphical user interface for easy visualization and export of outputs from fMRI data analyses. The second half describes the program from the developer's and the user's perspective, including its key functionality. The final section describes the application of the program to real data from clinical studies of dynamic connectivity and its use in the international project APGem.
40

Hledání korelátů změn tepové frekvence v fMRI datech / Correlates finding of heart rate changes in fMRI data

Jurečková, Kateřina January 2017 (has links)
This master's thesis deals with finding correlates of heart rate changes in fMRI data. The first part describes the principle of fMRI, the origin of the BOLD signal, data acquisition, pre-processing, and analysis. The next part describes heart rate variability and its impact on fMRI data. The following section is dedicated to pre-processing the heart rate time series into a form usable for finding correlates of heart rate variability in fMRI data with a generalized linear model. The process of statistical testing, its results, and a discussion can be found in the last part of this thesis.
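The core analysis step described above, entering a heart-rate regressor into a GLM for a voxel time series and testing its coefficient, can be sketched on simulated signals. The regressors are assumed to be already convolved with a response function; the block design, the random-walk heart-rate trace, and all effect sizes are illustrative, not from the thesis.

```python
import numpy as np

# Sketch: single-voxel GLM with a task regressor and a heart-rate regressor,
# followed by a t-test on the heart-rate coefficient.
rng = np.random.default_rng(5)
T = 200                                      # number of scans
task = (np.arange(T) // 20) % 2 * 1.0        # block design: 20 scans on/off
hr = rng.standard_normal(T).cumsum() / 10    # smooth heart-rate fluctuation
hr = (hr - hr.mean()) / hr.std()             # standardized regressor

X = np.column_stack([np.ones(T), task, hr])  # GLM design matrix
voxel = 2.0 + 1.5 * task + 0.8 * hr + rng.normal(0, 1.0, T)  # simulated voxel

beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
resid = voxel - X @ beta
sigma2 = resid @ resid / (T - X.shape[1])    # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)
t_hr = beta[2] / np.sqrt(cov[2, 2])          # t-statistic for the HR regressor
print(np.round(beta, 2), round(float(t_hr), 1))
```

In a whole-brain analysis the same fit is repeated per voxel and the t-map is thresholded; the statistical-testing part of the thesis corresponds to that thresholding step.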
