  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Evaluating SLAM algorithms for Autonomous Helicopters

Skoglund, Martin January 2008 (has links)
Navigation with unmanned aerial vehicles (UAVs) requires good knowledge of the current position and other states. A UAV navigation system often uses GPS and inertial sensors in a state estimation solution. If the GPS signal is lost or corrupted, state estimation must still be possible, and this is where simultaneous localization and mapping (SLAM) provides a solution. SLAM considers the problem of incrementally building a consistent map of a previously unknown environment while simultaneously localizing the vehicle within this map; a solution therefore does not require position from the GPS receiver. This thesis presents a visual-feature-based SLAM solution using a low-resolution video camera, a low-cost inertial measurement unit (IMU) and a barometric pressure sensor. State estimation is made with an extended information filter (EIF), where sparseness in the information matrix is enforced with an approximation. An implementation is evaluated on real flight data and compared to an EKF-SLAM solution. Results show that both solutions provide similar estimates but that the EIF is over-confident. The sparse structure is exploited, though possibly not fully, making the solution nearly linear in time; storage requirements are linear in the number of features, which enables evaluation over longer periods of time.
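The key property behind the EIF approach in this abstract is that, in information form, a measurement update is a simple addition to the information matrix, which is what makes enforcing sparsity attractive. A minimal sketch with made-up numbers (not the thesis implementation) showing that the information-filter update matches the usual Kalman update for a linear measurement:

```python
import numpy as np

# Minimal sketch (made-up numbers, not the thesis implementation) of why
# the information-filter measurement update is additive in the
# information matrix -- the property that makes sparsity-enforcing
# EIF-SLAM attractive.

# Prior: state estimate x with covariance P
x = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1],
              [0.1, 0.4]])

# Information form: Y = P^-1, y = Y x
Y = np.linalg.inv(P)
y = Y @ x

# A linear measurement z = H x + noise with covariance R
H = np.array([[1.0, 0.0]])
R = np.array([[0.2]])
z = np.array([1.3])

# Information-filter update: a simple addition to (Y, y)
Y_post = Y + H.T @ np.linalg.inv(R) @ H
y_post = y + H.T @ np.linalg.inv(R) @ z
x_if = np.linalg.solve(Y_post, y_post)      # recover the posterior mean

# Equivalent Kalman-filter update, for comparison
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_kf = x + (K @ (z - H @ x)).ravel()

print("IF and KF posterior means agree:", np.allclose(x_if, x_kf))
```

In SLAM the state and information matrix are far larger, and the thesis's approximation additionally sparsifies the off-diagonal blocks, which this toy example does not show.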
22

Reverse Engineering of Temporal Gene Expression Data Using Dynamic Bayesian Networks And Evolutionary Search

Salehi, Maryam 17 September 2008 (has links)
Capturing the mechanism of gene regulation in a living cell is essential to predicting the behavior of the cell in response to intercellular or extracellular factors. Such predictive capability can potentially lead to the development of improved diagnostic tests and therapeutics [21]. Among the reverse engineering approaches that aim to model gene regulation are Dynamic Bayesian Networks (DBNs). DBNs are of particular interest because these models can discover causal relationships between genes while dealing with noisy gene expression data. At the same time, discovering the optimal DBN model makes structure learning of DBNs a challenging topic, mainly because the high dimensionality of the search space of gene expression data makes exhaustive search strategies for identifying the best DBN structure impractical. In this work, the application of a covariance-based evolutionary search algorithm is proposed for the first time for structure learning of DBNs. In addition, the convergence time of the proposed algorithm is improved compared to previously reported covariance-based evolutionary search approaches. This is achieved by keeping a fixed number of good sample solutions from previous iterations. Finally, the proposed approach, M-CMA-ES, unlike gradient-based methods, has a high probability of converging to a global optimum. To assess the efficiency of this approach, a temporal synthetic dataset is developed. The proposed approach is then applied to this dataset as well as to the Brainsim dataset, a well-known simulated temporal gene expression dataset [58]. The results indicate that the proposed method is quite efficient in reconstructing the networks in both the synthetic and Brainsim datasets. Furthermore, it outperforms other algorithms in terms of both predicted structure accuracy and the mean square error of the reconstructed gene expression time series.
For validation purposes, the proposed approach is also applied to a biological dataset composed of 14 cell-cycle-regulated genes in the yeast Saccharomyces cerevisiae. Considering the KEGG pathway as the target network, the efficiency of the proposed reverse engineering approach significantly improves on the results of two previous studies of yeast cell cycle data in terms of capturing the correct interactions. / Thesis (Master, Computing) -- Queen's University, 2008-09-09 11:35:33.312
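The two ingredients the abstract names — adapting a sampling covariance from selected steps, and reusing good samples from previous iterations — can be illustrated with a toy loop on a continuous test function. This is not the thesis's M-CMA-ES and does not search DBN structures; it is a generic CMA-ES-flavored sketch with hand-picked hyperparameters:

```python
import numpy as np

# Toy covariance-based evolutionary search loop in the spirit of CMA-ES,
# minimizing a continuous test function. NOT the thesis's M-CMA-ES and
# not a DBN structure search; it only illustrates (1) adapting the
# sampling covariance from selected steps and (2) carrying good samples
# over between iterations.

rng = np.random.default_rng(0)

def sphere(x):                     # toy objective, minimum at the origin
    return float(np.sum(x ** 2))

dim, lam, mu = 4, 20, 5
mean = rng.normal(size=dim)
f_start = sphere(mean)
C = np.eye(dim)                    # search-distribution covariance
sigma = 0.5
elites = []                        # (fitness, point) pairs carried over

for _ in range(60):
    pop = [mean + sigma * rng.multivariate_normal(np.zeros(dim), C)
           for _ in range(lam)]
    pop += [x for _, x in elites]               # reuse previous elites
    pop.sort(key=sphere)
    best = pop[:mu]
    elites = [(sphere(x), x) for x in best]
    steps = (np.array(best) - mean) / sigma
    C = 0.7 * C + 0.3 * (steps.T @ steps) / mu  # rank-mu style update
    mean = np.mean(best, axis=0)

print(f"objective: start {f_start:.3f}, end {sphere(mean):.3g}")
```

A real CMA-ES additionally adapts the step size and uses evolution paths; the elite-reuse line mirrors the abstract's idea of keeping a fixed number of good solutions across iterations.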
23

Contributions to statistical learning and statistical quantification in nanomaterials

Deng, Xinwei 22 June 2009 (has links)
This research focuses on developing new techniques in statistical learning, including methodology, computation, and application, and on statistical quantification in nanomaterials. For a large number of random variables with temporal or spatial structure, we proposed shrinkage estimates of the covariance matrix that account for their Markov structure. The proposed method exploits the sparsity in the inverse covariance matrix in a systematic fashion. To deal with high-dimensional data, we proposed a robust kernel principal component analysis for dimension reduction, which can extract the nonlinear structure of high-dimensional data more robustly. To build prediction models more efficiently, we developed an active learning approach via sequential design that actively selects data points for the training set. By combining stochastic approximation and D-optimal designs, the proposed method can build a model with minimal time and effort. We also proposed factor logit models with a large number of categories for classification. We show that the convergence rate of the classifier functions estimated from the proposed factor model does not depend on the number of categories, but only on the number of factors; it can therefore achieve better classification accuracy. For statistical nano-quantification, a statistical approach is presented to quantify the elastic deformation of nanomaterials. We proposed a new statistical modeling technique, called sequential profile adjustment by regression (SPAR), to account for and eliminate various experimental errors and artifacts. SPAR can automatically detect and remove systematic errors and therefore gives a more precise estimate of the elastic modulus.
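The general idea behind covariance shrinkage can be sketched in a few lines: blend the noisy sample covariance with a structured target. The sketch below shrinks toward a diagonal target with a hand-picked intensity; the thesis's estimator instead exploits Markov structure and sparsity in the inverse covariance, which is not shown here:

```python
import numpy as np

# Minimal sketch of linear shrinkage of a sample covariance matrix
# toward a structured (here: diagonal) target. Illustrates the general
# shrinkage idea only; the thesis's estimator exploits Markov structure
# in the inverse covariance and chooses the intensity differently.

rng = np.random.default_rng(1)
n, p = 40, 10                       # few samples relative to dimension
X = rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)         # sample covariance (noisy for small n)
target = np.diag(np.diag(S))        # structured target: diagonal of S
alpha = 0.3                         # shrinkage intensity (hand-picked)

S_shrunk = (1 - alpha) * S + alpha * target

# Shrinkage typically improves conditioning of the estimate
print(f"condition number, sample: {np.linalg.cond(S):.1f}, "
      f"shrunk: {np.linalg.cond(S_shrunk):.1f}")
```

In practice the intensity `alpha` is chosen from the data (e.g., Ledoit-Wolf style) rather than fixed, and the target encodes the structure one believes in.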
24

Otimização de portfólios de investimento : a estratégia de paridade de risco no cenário brasileiro

Souza, Pierre Oberson de January 2015 (has links)
This work initiates the study of the portfolio optimization model known as risk parity in the Brazilian market. The sector indexes of the Brazilian stock exchange (Bovespa) were used as assets, and their data were used to estimate portfolios under the minimum-variance, equal-weight, and risk-parity models. It was found that in the risk parity model the method used to obtain the covariance matrix has little influence on the final result, which is a portfolio whose weight distribution and volatility are intermediate between those of the minimum-variance and equal-weight models. These results are consistent with those found in the literature based on European and American markets.
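A risk parity portfolio equalizes each asset's contribution to total portfolio risk. A sketch of one common way to compute it — numerically minimizing the spread of risk contributions under a budget constraint — using a made-up 3-asset covariance matrix rather than the Bovespa sector-index data from the study:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of an equal-risk-contribution (risk parity) portfolio on an
# invented 3-asset covariance matrix (not the Bovespa data).

Sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.090, 0.010],
                  [0.002, 0.010, 0.160]])

def risk_contributions(w, Sigma):
    port_var = w @ Sigma @ w
    marginal = Sigma @ w
    return w * marginal / port_var       # fractions that sum to 1

def objective(w):
    rc = risk_contributions(w, Sigma)
    return np.sum((rc - 1.0 / len(w)) ** 2)   # equalize contributions

n = Sigma.shape[0]
res = minimize(objective, np.full(n, 1.0 / n),
               bounds=[(1e-6, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w = res.x

print("weights:", np.round(w, 3))
print("risk contributions:", np.round(risk_contributions(w, Sigma), 3))
```

Consistent with the abstract, the resulting weights overweight the low-volatility asset (but less aggressively than minimum variance would), landing between the equal-weight and minimum-variance extremes.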
26

Improvement of thermal and epithermal neutron scattering data for the interpretation of integral measurements

Scotta, Juan Pablo 26 September 2017 (has links)
This work studies the thermal neutron scattering of light water for light-water-reactor applications. The thermal scattering law model for hydrogen bound in the water molecule in the JEFF-3.1.1 nuclear data library is based on experimental measurements performed in the 1960s. Its scattering physics was compared with a model based on molecular dynamics calculations developed at the Atomic Center in Bariloche (Argentina), the CAB model. The impact of these models was also evaluated on reactor calculations at cold conditions. The selected benchmark was the MISTRAL program (UOX and MOX configurations), carried out in the zero-power reactor EOLE at CEA Cadarache (France). The contribution of thermal neutron scattering on hydrogen in water was quantified in terms of the difference in calculated reactivity and the calculation error on the isothermal reactivity temperature coefficient (RTC). For the UOX lattice, the reactivity calculated with the CAB model at 20 °C is +90 pcm larger than with JEFF-3.1.1, while for the MOX lattice it is +170 pcm larger, because of the high sensitivity of thermal scattering for this type of fuel. In the temperature range from 10 °C to 80 °C, the calculation error on the RTC for the UOX lattice is -0.27 ± 0.3 pcm/°C with JEFF-3.1.1 and +0.05 ± 0.3 pcm/°C with the CAB model. For the MOX lattice, it is -0.98 ± 0.3 pcm/°C and -0.72 ± 0.3 pcm/°C with the JEFF-3.1.1 library and the CAB model respectively. The results illustrate the improvement brought by the CAB model in the calculation of this safety parameter.
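The isothermal RTC reported above is, in essence, a finite difference of reactivity with respect to temperature. A back-of-envelope sketch with invented reactivity values (not MISTRAL data) showing the arithmetic behind the pcm/°C figures:

```python
# Back-of-envelope sketch of how an isothermal reactivity temperature
# coefficient (RTC) follows from two reactivity values. The numbers
# below are invented for illustration, not MISTRAL measurements.

rho_10C = 120.0   # reactivity at 10 degC, in pcm (made up)
rho_80C = 85.0    # reactivity at 80 degC, in pcm (made up)

rtc = (rho_80C - rho_10C) / (80.0 - 10.0)   # pcm per degC
print(f"RTC = {rtc:.2f} pcm/degC")          # -> RTC = -0.50 pcm/degC
```

The calculation error quoted in the abstract is the difference between such a calculated RTC and the measured one, which is why a value near 0 pcm/°C (as obtained with the CAB model for UOX) indicates better agreement.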
28

Usage of kinship information and mixed models for evaluation and selection of sugarcane genotypes

Edjane Gonçalves de Freitas 02 August 2013 (has links)
In sugarcane breeding programs, experiments are installed every year to evaluate genotypes that may eventually be recommended for planting or used as genitors. These experiments are conducted at different locations over different harvests, and they are often highly unbalanced because not all genotypes are evaluated in all experiments. Ordinary approaches such as joint analysis of variance (ANOVA) are unfeasible due to the unbalancing and to assumptions that do not adequately model the relationships among observations. The use of mixed models with the REML/BLUP methodology is an alternative for analyzing such sugarcane experiments, and it also allows the incorporation of kinship information among individuals. In this context, 44 trials (locations) of the sugarcane breeding program of the Agronomic Institute of Campinas (IAC) were analyzed, with 74 genotypes (clones and varieties) and up to 5 harvests. The experimental design was randomized blocks with 2 to 6 replicates. The trait analyzed was TPH (tons of pol per hectare). Forty models were tested: in the first 20, different variance-covariance (VCOV) structures for locations and harvests were evaluated; in the following 20, the genetic relationship matrix A was incorporated in addition to the VCOV matrices. According to AIC, Model 11, which assumes the FA1, AR1 and ID matrices for locations, harvests and genotypes respectively, was the best and therefore the most efficient for selecting superior genotypes. Compared with the traditional model (experiment means), the ranking of genotypes changed; there is a moderate correlation between the traditional model and Model 11 (ρ = 0.63, p-value < 0.001). Using a mixed model without adjusting the VCOV matrices (Model 1) is better than using the traditional model, as suggested by the higher correlation between Models 1 and 11 (ρ = 0.87, p-value < 0.001). The use of Model 11 together with the breeder's experience may increase the efficiency of selection in sugarcane breeding programs.
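The rank-agreement comparisons reported above (correlations between genotype predictions from different models) can be sketched with synthetic data. The values below are invented, not the IAC trial data, and the "models" are just noisy views of a common underlying effect:

```python
import numpy as np
from scipy.stats import spearmanr

# Sketch of a rank-agreement check between two models' genotype
# predictions, as in the abstract's model comparisons. Synthetic data:
# both "models" observe the same hypothetical true genotype effects
# with different noise levels.

rng = np.random.default_rng(7)
true_effect = rng.normal(size=30)                        # 30 genotypes
model_a = true_effect + rng.normal(scale=0.3, size=30)   # lower-noise model
model_b = true_effect + rng.normal(scale=1.0, size=30)   # higher-noise model

rho, pval = spearmanr(model_a, model_b)
print(f"Spearman rho = {rho:.2f}, p-value = {pval:.3g}")
```

In the study, a moderate correlation between rankings (such as ρ = 0.63) signals that model choice materially changes which genotypes are selected, which is the practical motivation for preferring the AIC-best mixed model.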
29

Robust classification methods on the space of covariance matrices: application to texture and polarimetric synthetic aperture radar image classification

Ilea, Ioana 26 January 2017 (has links)
In recent years, covariance matrices have demonstrated their interest in a wide variety of applications in signal and image processing. The work presented in this thesis focuses on the use of covariance matrices as signatures for robust classification. In this context, a robust classification workflow is proposed, resulting in the following contributions. First, robust covariance matrix estimators are used to reduce the impact of outlier observations during the estimation process.
Second, the Riemannian Gaussian and Laplace distributions, as well as their mixture models, are considered to represent the observed covariance matrices. The k-means and expectation-maximization algorithms are then extended to the Riemannian case to estimate their parameters: the mixture weights, the central covariance matrix, and the dispersion. Next, a new centroid estimator, called the Huber centroid, is introduced based on the theory of M-estimators. Further on, a new local descriptor named the Riemannian Fisher vector is introduced to model non-stationary images. Moreover, a statistical hypothesis test based on the geodesic distance is introduced to regulate the classification false alarm rate. In the end, the proposed methods are evaluated in the context of texture image classification, brain decoding, and simulated and real PolSAR image classification.
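Two building blocks that recur in this line of work are the affine-invariant geodesic distance between symmetric positive-definite (SPD) matrices and the Riemannian (Karcher) centroid. The sketch below is the generic textbook recipe, not the thesis code; in particular, no Huber weighting is applied:

```python
import numpy as np
from scipy.linalg import logm, expm, sqrtm, inv

# Sketch of SPD-manifold building blocks: the affine-invariant geodesic
# distance and a fixed-point iteration toward the Karcher centroid.
# Generic textbook recipe, NOT the thesis code (no Huber weighting).

def geodesic_distance(A, B):
    s = inv(sqrtm(A))
    return np.linalg.norm(logm(s @ B @ s), ord="fro")

def karcher_mean(mats, iters=20, step=1.0):
    M = np.mean(mats, axis=0)                 # Euclidean mean as a start
    for _ in range(iters):
        s, s_inv = sqrtm(M), inv(sqrtm(M))
        # average the log-maps of all matrices at the current estimate
        T = np.mean([logm(s_inv @ X @ s_inv) for X in mats], axis=0)
        M = s @ expm(step * T) @ s
    return M

mats = [np.diag([1.0, 2.0]), np.diag([4.0, 0.5])]
M = karcher_mean(mats)
print("centroid:\n", np.round(M, 3))
print("distance to each point:",
      round(geodesic_distance(mats[0], M), 3),
      round(geodesic_distance(mats[1], M), 3))
```

For two points, the Karcher mean is the geodesic midpoint, so it sits at equal geodesic distance from both matrices; a Huber-type centroid would instead downweight matrices far from the current estimate.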
30

Explicit Estimators for a Banded Covariance Matrix in a Multivariate Normal Distribution

Karlsson, Emil January 2014 (has links)
The problem of estimating the mean and covariances of a multivariate normally distributed random vector has been studied in many forms. This thesis focuses on the estimators proposed in [15] for a banded covariance structure with m-dependence. It presents the previous results for the estimator and rewrites the estimator for m = 1, making it easier to analyze. This leads to an adjustment, and a proposition for an unbiased estimator is presented, followed by a new and simpler proof of consistency. The theory is then generalized to a general linear model, where the corresponding theorems and propositions are established for unbiasedness and consistency. In the last chapter, simulations with the previous and new estimators verify that the theoretical results indeed make an impact.
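The banding idea behind such estimators can be illustrated directly: under m-dependence, covariances more than m steps off the diagonal are zero, so zeroing the far off-diagonals of the sample covariance cannot hurt when the truth is banded. The naive tapering below is for illustration only and is not the thesis's explicit (unbiasedness-adjusted) estimator:

```python
import numpy as np

# Sketch of banding a sample covariance under 1-dependence (m = 1).
# Naive "zero out the far off-diagonals" tapering, for illustration
# only; the thesis's explicit estimator is constructed differently
# (in particular, to be unbiased).

rng = np.random.default_rng(3)
p, n = 5, 500

# True tridiagonal (1-dependent) covariance
true_cov = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
L = np.linalg.cholesky(true_cov)
X = rng.normal(size=(n, p)) @ L.T       # samples with covariance true_cov

S = np.cov(X, rowvar=False)             # unrestricted sample covariance
band = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= 1
S_banded = np.where(band, S, 0.0)       # keep only the m = 1 band

err_full = np.linalg.norm(S - true_cov)
err_band = np.linalg.norm(S_banded - true_cov)
print(f"Frobenius error, full: {err_full:.3f}, banded: {err_band:.3f}")
```

Since the banded estimate agrees with the truth (both zero) outside the band, its Frobenius error can never exceed that of the unrestricted sample covariance when the true covariance is banded.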
