About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Modelagem de sistemas dinamicos não lineares utilizando sistemas fuzzy, algoritmos geneticos e funções de base ortonormal / Modeling of nonlinear dynamic systems using fuzzy systems, genetic algorithms and orthonormal basis functions

Medeiros, Anderson Vinicius de 23 January 2006
Advisors: Wagner Caradori do Amaral, Ricardo Jose Gabrielli Barreto Campello / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Abstract: This work introduces a methodology for the generation and optimization of Takagi-Sugeno (TS) fuzzy models with Orthonormal Basis Functions (OBF) for nonlinear dynamic systems, based on a genetic algorithm. Orthonormal basis functions have been used because they provide models with properties such as the absence of output feedback and the possibility of reaching a reasonable approximation capability with only a few parameters. TS fuzzy models add to these properties interpretability and ease of representing knowledge in a linguistic manner. Genetic algorithms, in turn, are a well-established method for tuning the parameters of TS fuzzy models. In this context, a genetic algorithm was developed for the optimization of two architectures: the OBF TS fuzzy model and its extension, the generalized OBF TS fuzzy model. Local linear and nonlinear models in the consequents of the fuzzy rules were analyzed, as well as the difference between local and global (least-squares) estimation of the parameters of these local models. Each architecture had a specific chromosome representation in the genetic algorithm, and a fitness function based on the Akaike information criterion was developed for both. Regarding the genetic operators, the arithmetic crossover was modified to maintain population diversity, and the Gaussian mutation used a distribution that varies along the generations and differs for each gene. In addition, a method for simplifying solutions through similarity measures was introduced for the first architecture. The whole methodology was evaluated by modeling two nonlinear dynamic systems: a polymerization process and a magnetic levitator. / Master's degree / Automation / Master in Electrical Engineering
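As an illustration of the operator changes just described, here is a minimal Python sketch of an arithmetic crossover with a per-gene mixing coefficient (one way to preserve population diversity), a Gaussian mutation whose per-gene spread decays over the generations, and an AIC-style fitness. The function names, the linear decay schedule, and the Gaussian-error AIC form are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def arithmetic_crossover(p1, p2):
    """Arithmetic crossover: children are convex combinations of the parents.
    Drawing a fresh mixing coefficient per gene (instead of one fixed alpha)
    is one way to help maintain population diversity."""
    alpha = rng.uniform(0.0, 1.0, size=p1.shape)  # per-gene mixing coefficient
    c1 = alpha * p1 + (1.0 - alpha) * p2
    c2 = alpha * p2 + (1.0 - alpha) * p1
    return c1, c2

def gaussian_mutation(chrom, gen, max_gen, sigma0):
    """Gaussian mutation with a per-gene scale (sigma0 is a vector) that
    shrinks linearly over the generations - a distribution that varies
    along the generations and differs for each gene."""
    sigma = sigma0 * (1.0 - gen / max_gen)
    return chrom + rng.normal(0.0, sigma)

def aic_fitness(n, sse, k):
    """AIC-based fitness for least-squares fitting (lower is better):
    AIC = n*ln(SSE/n) + 2k under Gaussian errors."""
    return n * np.log(sse / n) + 2 * k
```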
62

Mensuração da biomassa e construção de modelos para construção de equações de biomassa / Biomass measurement and model selection for biomass equations

Edgar de Souza Vismara 07 May 2009
Interest in quantifying forest biomass has grown considerably in recent years, driven by the potential of forests to store atmospheric carbon in their biomass. Forest biomass measurement implies a destructive procedure, so forest inventories and biomass surveys apply indirect procedures to determine the biomass of the different components of the forest (wood, branches, leaves, roots, etc.). The usual approach consists of taking a destructive sample for the measurement of tree attributes and establishing an empirical relationship between the biomass and other attributes that can be measured directly on standing trees, e.g., stem diameter and tree height. The biomass of felled trees can be determined by two techniques: the gravimetric technique, which weighs the components in the field and takes a sample for the determination of water content in the laboratory; and the volumetric technique, which determines the volume of the component in the field and takes a sample for the determination of wood specific gravity (wood basic density) in the laboratory. The gravimetric technique applies to all components of the trees, while the volumetric technique is usually restricted to the stem and large branches. In this study, these two techniques were applied to a sample of 200 trees of 10 different species from the region of Linhares, ES, with a view to future use in reforestation projects. From each tree, 5 cross-section discs of the stem were taken to investigate the best procedure for determining water content in the gravimetric technique and wood specific gravity in the volumetric technique, and the Akaike Information Criterion (AIC) was used to compare the statistical models for predicting tree biomass. For stem water content, the best procedure was the arithmetic mean of the water content of the cross-sections at the base, middle and top of the stem. For wood specific gravity, the best procedure was the arithmetic mean of all five cross-section discs; however, for determining biomass, i.e., the product of stem volume and wood specific gravity, the best procedure was to use the specific gravity of the middle cross-section disc. Using an average wood specific gravity per species gave worse results than any procedure that used wood specific gravity information at the individual tree level. Seven models, as variations of the Spurr and Schumacher-Hall volume equations (four Gaussian and three lognormal), with and without height as a predictor, were tested for the different tree components: wood (stem and large branches), small branches, leaves and total biomass; the same seven models were also fitted with a wood-penetration measurement as an additional predictor, giving fourteen models in total. In general, Schumacher-Hall models were better than Spurr-based models, and models that included only diameter (DBH) performed better than models with diameter and height; height proved effective in explaining tree biomass only in combination with the penetration measurement. When the penetration measure, as a surrogate of wood density, was added, the models with diameter, height and penetration became the best, and, except for the leaf biomass model, all selected models proved adequate for predicting above-ground biomass in reforestation areas.
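For reference, the two allometric families named above have the following standard textbook forms (a hedged reconstruction; the thesis's exact parameterizations and error structures may differ), with biomass $B$, diameter $d$ and height $h$:

```latex
% Spurr (combined-variable) model:
B = \beta_0 + \beta_1\, d^2 h + \varepsilon
% Schumacher-Hall model (lognormal when fitted on the log scale):
B = \beta_0\, d^{\beta_1} h^{\beta_2} \varepsilon ,
\qquad \ln B = \ln\beta_0 + \beta_1 \ln d + \beta_2 \ln h + \ln\varepsilon
% Candidate models compared by AIC = 2k - 2\ln\hat{L}.
```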
63

Program pro analýzu ekonomických dat užitím matematického modelování v Maple / Program for Analyzing Economical Data via Mathematical Modeling in Maple

Žigárdy, Martin January 2010
In this diploma thesis I constructed a generally usable program for processing economic data with mathematical methods of linear regression in the Maple system. The program is used for trend analysis of the examined quantities. Via multi-stage algorithmization and the implementation of an information criterion, I created an interactive form with a user-friendly interface and the possibility of importing data directly from office suite applications. The functionality of the program is verified on an example with a specific data collection.
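A minimal Python analogue of the trend analysis described above: least-squares polynomial trends of increasing degree, compared by an information criterion (AIC here). This is an illustrative sketch, not the Maple program itself.

```python
import numpy as np

def fit_trend_by_aic(t, y, max_degree=4):
    """Fit polynomial trends of increasing degree by least squares and
    return the one minimizing AIC = n*ln(SSE/n) + 2k."""
    n = len(y)
    best = None
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(t, y, deg)
        resid = y - np.polyval(coeffs, t)
        sse = float(resid @ resid)
        k = deg + 1                      # number of estimated coefficients
        aic = n * np.log(sse / n) + 2 * k
        if best is None or aic < best[0]:
            best = (aic, deg, coeffs)
    return best                          # (aic, degree, coefficients)

t = np.arange(24, dtype=float)           # e.g. a monthly time index
y = 2.0 + 0.5 * t + np.random.default_rng(1).normal(0, 1, 24)
print(fit_trend_by_aic(t, y))
```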
64

Dynamic prediction of repair costs in heavy-duty trucks

Saigiridharan, Lakshidaa January 2020
Pricing of repair and maintenance (R&M) contracts is one of the most important processes carried out at Scania. Repair costs at Scania are currently predicted with experience-based methods that compute average repair costs of contracts terminated in the recent past rather than with statistical models. This method is difficult to apply to a reference population of rigid Scania trucks. The purpose of this study is therefore to build suitable statistical models to predict repair costs for four variants of rigid Scania trucks. The study gathers repair data from multiple sources and performs feature selection using the Akaike Information Criterion (AIC) to extract the features that most influence repair costs for each truck variant; it shows that including operational features as predictors can further inform the pricing of contracts. The hurdle Gamma model, widely used to handle zero inflation in Generalized Linear Models (GLMs), is used to fit the data, which consist of numerous zero and non-zero values. Because of the inherent hierarchical structure in the data, expressed by individual chassis, a hierarchical hurdle Gamma model is also implemented. Both statistical models are found to perform much better than the experience-based prediction method, as evaluated with the mean absolute error (MAE) and root mean square error (RMSE). A final model comparison is conducted using the AIC to draw conclusions about the goodness of fit and predictive performance of the two statistical models. On this assessment, the hierarchical hurdle Gamma model performs best.
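A hedged sketch of a (non-hierarchical) hurdle Gamma model in Python with statsmodels: a logistic part for whether any repair cost occurs, a Gamma GLM with log link for the positive costs, and the two combined for prediction. The feature layout and all names are invented for illustration, not taken from the thesis.

```python
import numpy as np
import statsmodels.api as sm

def fit_hurdle_gamma(X, y):
    """Two-part (hurdle) model for zero-inflated positive costs:
    P(y > 0) via logistic regression, E[y | y > 0] via a Gamma GLM."""
    Xc = sm.add_constant(X)
    occurred = (y > 0).astype(float)
    logit = sm.Logit(occurred, Xc).fit(disp=0)            # hurdle part
    pos = y > 0
    gamma = sm.GLM(y[pos], Xc[pos],
                   family=sm.families.Gamma(sm.families.links.Log())).fit()
    return logit, gamma

def predict_cost(logit, gamma, X):
    """Unconditional expected cost: P(y > 0) * E[y | y > 0]."""
    Xc = sm.add_constant(X)
    return logit.predict(Xc) * gamma.predict(Xc)

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                             # e.g. usage features
p = 1 / (1 + np.exp(-(0.3 + X @ [0.8, -0.5, 0.2])))       # P(cost occurs)
mu = np.exp(1.0 + X @ [0.4, 0.1, -0.3])                   # mean positive cost
y = np.where(rng.uniform(size=500) < p, rng.gamma(2.0, mu / 2.0), 0.0)
print(predict_cost(*fit_hurdle_gamma(X, y), X)[:5])
```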
65

Modélisation des bi-grappes et sélection des variables pour des données de grande dimension : application aux données d’expression génétique / Biclustering models and variable selection for high-dimensional data: application to gene expression data

Chekouo Tekougang, Thierry 08 1900
The simulations were implemented in Java. / Clustering is a classical method for analysing gene expression data. When it is applied to the rows (e.g., genes), each column belongs to all clusters. However, it is often observed that a subset of genes is co-regulated and co-expressed only under a subset of conditions, and behaves almost independently under other conditions. For this reason, biclustering techniques have been proposed to look for sub-matrices of a data matrix; biclustering is a simultaneous clustering of the rows and columns of a data matrix. Most of the biclustering algorithms proposed in the literature have no statistical foundation, yet it is worthwhile to examine the models underlying these algorithms and to develop statistical models that yield significant biclusters. In this thesis, we review the biclustering algorithms that appear to be the most popular, grouping them according to the type of homogeneity within the bicluster and the type of overlap that may be encountered, and we shed light on statistical models that can justify them. It turns out that some techniques can be justified in a Bayesian framework. We develop an extension of the plaid biclustering model in a Bayesian framework and propose a measure of biclustering complexity; the deviance information criterion (DIC) is used to select the number of biclusters. Studies on gene expression data and simulated data give satisfactory results. To our knowledge, existing biclustering algorithms treat genes and experimental conditions as independent entities and do not incorporate prior biological information about them. We introduce a new Bayesian plaid model for gene expression data which integrates biological knowledge and takes into account pairwise interactions between genes and between conditions via a Gibbs field. Dependence between these entities is induced by relational graphs, one for the genes and one for the conditions, each built from the k-nearest neighbours, which allows the prior distribution of the labels to be defined as auto-logistic models. Gene similarities are computed using the Gene Ontology (GO). To estimate the parameters, we adopt a hybrid procedure that mixes MCMC with a variant of the Wang-Landau algorithm. Experiments on simulated and real data show the performance of our approach. It should also be noted that microarray data may contain many noise variables, i.e., variables unable to discriminate between the groups, and these variables can mask the true clustering structure. Inspired by the plaid model, we propose a model that simultaneously recovers the true clustering structure and identifies the discriminating variables; it assumes an additive superposition of clusters, i.e., an observation can be explained by more than one cluster. The problem is addressed with a binary latent vector, and estimation is obtained via the Monte Carlo EM algorithm; importance sampling is used to reduce the computational cost of the Monte Carlo sampling at each EM step. Numerical examples demonstrate the usefulness of these methods in terms of variable selection and clustering.
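For orientation, the plaid model underlying the Bayesian extension described above is usually written (following Lazzeroni and Owen) as an additive superposition of biclusters, and the DIC used to choose the number of biclusters penalizes the posterior mean deviance. The thesis's priors and Gibbs-field extension are not reproduced here.

```latex
% Plaid model: K additively overlapping biclusters
Y_{ij} = \mu_0 + \sum_{k=1}^{K} (\mu_k + \alpha_{ik} + \beta_{jk})\,
         \rho_{ik}\,\kappa_{jk} + \varepsilon_{ij},
\qquad \rho_{ik}, \kappa_{jk} \in \{0,1\}
% Deviance information criterion used to choose K:
\mathrm{DIC} = \overline{D(\theta)} + p_D,
\qquad p_D = \overline{D(\theta)} - D(\bar\theta)
```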
67

Comparing generalized additive neural networks with multilayer perceptrons / Johannes Christiaan Goosen

Goosen, Johannes Christiaan January 2011
In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied and compared as prediction techniques. MLPs are the most widely used type of artificial neural network (ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori method to determine the number of hidden neurons in each of the hidden layers of ANNs. Guidelines exist that are either heuristic or based on simulations derived from limited experiments. A modified version of the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and utilized to construct good MLP models. This algorithm enables the comparison with GANN models. GANNs are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less complex compared to MLPs and results can be interpreted with a graphical method, called the partial residual plot. A GANN consists of an input layer where each of the input nodes has its own MLP with one hidden layer. Originally, GANNs were constructed by interpreting partial residual plots. This method is time-consuming and subjective, which may lead to the creation of suboptimal models. Consequently, an automated construction algorithm for GANNs was created and implemented in the SAS® statistical language. This system was called AutoGANN and is used to create good GANN models. A number of experiments are conducted on five publicly available data sets to gain insight into the similarities and differences between GANN and MLP models. The data sets include regression and classification tasks. In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the average validation error as model selection criterion are performed. The models created are compared in terms of predictive accuracy, model complexity, comprehensibility, ease of construction and utility. The results show that the choice of model is highly dependent on the problem, as no single model always outperforms the other in terms of predictive accuracy. GANNs may be suggested for problems where interpretability of the results is important. The time taken to construct good MLP models by the modified N2C2S algorithm may be shorter than the time to build good GANN models by the automated construction algorithm. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
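A minimal numpy sketch of the GANN forward pass described above, assuming one small subnetwork (one hidden layer) per input variable whose outputs are summed. The weights are random placeholders rather than a trained model, and the class layout is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

class GANN:
    """Generalized additive neural network: each input variable gets its
    own tiny MLP with one hidden layer, and the per-variable outputs are
    summed, so f(x) = bias + sum_j f_j(x_j), keeping each f_j interpretable."""
    def __init__(self, n_inputs, n_hidden=4):
        self.W1 = rng.normal(size=(n_inputs, n_hidden))  # input -> hidden, per variable
        self.b1 = np.zeros((n_inputs, n_hidden))
        self.W2 = rng.normal(size=(n_inputs, n_hidden))  # hidden -> output, per variable
        self.bias = 0.0

    def partial_effects(self, X):
        """f_j(x_j) for each variable j: the curves behind partial residual plots.
        X has shape (n_samples, n_inputs)."""
        H = np.tanh(X[:, :, None] * self.W1[None] + self.b1[None])
        return (H * self.W2[None]).sum(axis=2)           # (n_samples, n_inputs)

    def predict(self, X):
        return self.bias + self.partial_effects(X).sum(axis=1)

X = rng.normal(size=(5, 3))
print(GANN(n_inputs=3).predict(X))
```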
69

Ecological and Edaphic Correlations of Soil Invertebrate Community Structure in Dry Upland Forests of Eastern Africa

Mauritsson, Karl January 2018
Natural forests are characterised by great vegetation diversity and create habitats for a major part of Earth’s terrestrial organisms. Plantation forests, which are mainly composed of a few genera of fast-growing trees, constitute an increasing fraction of global forests, but they only partly compensate for loss of area, habitat and ecological functions in natural forests. Plantation forests established near natural forests can be expected to serve as buffers, but they seem to be relatively poor in invertebrate species and it is not clear why. This bachelor’s degree project aimed at establishing the ecological and edaphic factors that correlate with soil invertebrate diversity in dry upland forests and surrounding plantation forests in eastern Africa. Some aspects of the above-ground vegetation heterogeneity were investigated, since this was assumed to influence the heterogeneity of the soil environment, which is considered critical for soil biodiversity. The obtained knowledge may be valuable in conservation activities in East African forests, which are threatened by destruction, fragmentation and exotic species. The study area was Karura Forest, a dry upland forest in Nairobi, Kenya. Three different sites were investigated: a natural forest site characterized by the indigenous tree species Brachylaena huillensis and Croton megalocarpus, and two different plantation forest sites, characterized by the exotic species Cupressus lusitanica and Eucalyptus paniculata, respectively. For each forest type, six plots were visited. Soil invertebrates were extracted from collected soil and litter samples by sieving and Berlese-Tullgren funnels. The invertebrates were identified, and the taxonomic diversity calculated at the order level. The ecological and edaphic factors, measured or calculated for each plot, were tree species diversity, ratio of exotic tree species, vertical structure of trees, vegetation cover, vegetation density, litter quality, soil pH, soil temperature and soil moisture. One-way ANOVA was used to compare soil invertebrate diversity and other variables between different forest types. Akaike’s Information Criterion and Multiple Linear Regression were used to establish linear models with variables that could explain measured variations of the diversity. There was some evidence for higher soil invertebrate diversity in natural forests than in surrounding plantation forests. The abundance of soil invertebrates was also clearly higher in natural forests, which indicates that natural forests are more important than plantation forests for conservation of soil invertebrate populations. Soil invertebrate diversity (in terms of number of orders present) was found to be influenced by forest type and litter quality. The diversity was higher at places with high amounts of coarse litter, which here is considered more heterogeneous than fine litter. The dependence on forest type was partly a consequence of differences in soil pH, since Eucalyptus trees lower soil pH and thereby also soil biodiversity. No relation to heterogeneity of above-ground vegetation was found. For future conservation activities in Karura Forest Reserve it is recommended to continue removing exotic plant species and replanting indigenous trees, to prioritize the removal of Eucalyptus trees before Cypress trees, to only remove a few trees at a time and to establish ground vegetation when doing so.
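A hedged Python sketch of the model-screening step described above: every non-empty subset of candidate predictors fitted by ordinary least squares and ranked by AIC (statsmodels). The predictor names and data are invented for illustration, not taken from the study.

```python
from itertools import combinations

import numpy as np
import statsmodels.api as sm

def best_subsets_by_aic(X, y, names):
    """Fit OLS on every non-empty subset of predictors and rank by AIC."""
    results = []
    for k in range(1, len(names) + 1):
        for subset in combinations(range(len(names)), k):
            model = sm.OLS(y, sm.add_constant(X[:, list(subset)])).fit()
            results.append((model.aic, [names[j] for j in subset]))
    return sorted(results)

rng = np.random.default_rng(4)
names = ["litter_quality", "soil_pH", "soil_moisture", "tree_diversity"]  # assumed
X = rng.normal(size=(18, 4))             # 18 plots, 4 candidate predictors
y = 5 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.5, 18)
for aic, subset in best_subsets_by_aic(X, y, names)[:3]:
    print(f"AIC={aic:6.1f}  predictors={subset}")
```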
70

Transformation model selection by multiple hypotheses testing

Lehmann, Rüdiger 17 October 2016
Transformations between different geodetic reference frames are often performed such that the transformation parameters are first determined from control points. If we do not know in advance which of the numerous transformation models is appropriate, we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints to those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier; these are shown to perform better than the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows’ Cp, which are based on an information-theoretic approach. Nevertheless, whenever they are comparable, the results of an exemplary computation almost coincide.
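The two information criteria named above have the following standard forms, stated here for reference ($k$ is the number of estimated parameters, $n$ the sample size, $\hat{L}$ the maximized likelihood, and $\hat\sigma^2$ the error-variance estimate from the full model):

```latex
\mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},
\qquad \mathrm{AIC} = 2k - 2\ln \hat{L}
% Mallows' Cp for a candidate model with p parameters and residual
% sum of squares SSE_p:
C_p = \frac{\mathrm{SSE}_p}{\hat\sigma^2} - n + 2p
```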
