1

Differential Abundance and Clustering Analysis with Empirical Bayes Shrinkage Estimation of Variance (DASEV) for Proteomics and Metabolomics Data

Huang, Zhengyan 01 January 2019 (has links)
Mass spectrometry (MS) is widely used for proteomic and metabolomic profiling of biological samples. Data obtained by MS are often zero-inflated; these zero values are called point mass values (PMVs). Zeros can be further grouped into biological PMVs, caused by the true absence of a component, and technical PMVs, caused by the detection limit, and there is no simple way to tell the two types apart. Mixture models were developed to separate the two types of zeros and to perform differential abundance analysis, but we observe that such models can be unstable when the number of non-zero values is small. In this dissertation, we propose a new differential abundance (DA) analysis method, DASEV, which applies empirical Bayes shrinkage estimation to the variance. We hypothesized that a more robust variance estimate would in turn improve the accuracy of differential abundance analysis. Despite the instability issue, the mixture-model framework offers a promising strategy for separating the two types of PMVs, so we adapted the mixture distribution proposed in the original mixture-model design and assumed that the variances of all components follow a common distribution. We estimate each variance by borrowing information from the other components through this assumed distribution, and then re-estimate the remaining parameters using the shrunken variances. This yields better and more stable estimates of the variances, mean abundances, and proportions of biological PMVs, especially when the proportion of zeros is large, and the proposed method therefore achieves clear improvements in DA analysis. We also extend the method to clustering analysis. To our knowledge, the clustering methods commonly used for MS omics data are K-means and hierarchical clustering, both of which have limitations when applied to zero-inflated data, whereas model-based clustering methods are widely used for many data types, including zero-inflated data. We therefore propose the extension DASEV.C as a model-based clustering method and compare its performance with K-means and hierarchical clustering; under certain scenarios, the proposed method returns more accurate clusters than the standard methods. Finally, we develop an R package, dasev, for the methods presented in this dissertation. Its major functions, DASEV.DA and DASEV.C, implement the Bayes shrinkage estimation of the variance and then conduct the differential abundance and cluster analyses, and are designed to give researchers flexibility in specifying input options.
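A minimal sketch of the kind of empirical Bayes variance shrinkage described above — not the DASEV implementation — in which each feature's sample variance is pulled toward a prior variance estimated from all features. The prior degrees of freedom and the use of the mean variance as the prior are illustrative assumptions.

```python
import numpy as np

def shrink_variances(sample_vars, df, prior_df=10.0):
    """Shrink per-feature sample variances toward a pooled prior variance.

    The shrunken variance is a degrees-of-freedom-weighted average of each
    feature's own sample variance and a prior variance estimated from all
    features (here simply their mean); prior_df controls the shrinkage strength.
    """
    prior_var = np.mean(sample_vars)  # empirical prior shared across features
    return (df * sample_vars + prior_df * prior_var) / (df + prior_df)

# toy example: 5 features measured on 8 samples each (df = 7)
rng = np.random.default_rng(0)
s2 = rng.normal(size=(5, 8)).var(axis=1, ddof=1)
print(shrink_variances(s2, df=7))
```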
2

Résolution de problèmes inverses en géodésie physique / On solving some inverse problems in physical geodesy

Abdelmoula, Amine 20 December 2013 (has links)
This work focuses on two well-known problems in physical geodesy. The first is the determination of the geoid over a given area of the Earth. If the Earth were a homogeneous sphere, the gravity at a point would be entirely determined by its distance to the Earth's center or, equivalently, by its altitude. Since the Earth is neither spherical nor homogeneous, gravity must be computed at every point. Starting from a reference ellipsoid, we seek the correction to a first approximation of the gravity field that yields a geoid, i.e. a surface on which the gravity potential is constant. The method used is least-squares collocation, which is used to solve large generalized least-squares problems. The second problem is a geodetic inverse problem: finding a distribution of point masses (characterized by their intensities and positions) whose generated potential best approximates a given potential field. Over the whole Earth, a potential function is usually expressed in terms of spherical harmonics, which are basis functions with global support, and the sought potential is identified by solving a least-squares problem. When only a limited area of the Earth is studied, estimating the point-mass parameters by means of spherical harmonics is prone to error, since these functions are no longer orthogonal over a partial domain of the sphere. The point-mass determination problem on a limited region is therefore treated by constructing a Slepian basis that is orthogonal over the specified limited domain of the sphere. We propose an iterative algorithm for the numerical solution of the local point-mass determination problem and give some results on the robustness of this reconstruction process. We also study the stability of the problem with respect to added noise. Some numerical tests are presented and discussed.
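A minimal sketch of the point-mass fitting step — not the thesis implementation, and without the Slepian basis or the iterative position updates: with the mass positions held fixed, the generated potential is linear in the intensities, so the fit reduces to an ordinary least-squares problem. The unit gravitational constant and the synthetic geometry are assumptions for the example.

```python
import numpy as np

def fit_point_mass_intensities(obs_points, target_potential, mass_positions, G=1.0):
    """Fit point-mass intensities so their summed potential matches a target.

    With the positions fixed, the potential is linear in the intensities,
    so the problem is an ordinary least-squares fit A m ≈ V.
    """
    # design matrix: A[i, j] = G / distance(observation i, mass j)
    d = np.linalg.norm(obs_points[:, None, :] - mass_positions[None, :, :], axis=2)
    A = G / d
    m, *_ = np.linalg.lstsq(A, target_potential, rcond=None)
    return m

# toy example: recover two unit masses from noiseless observations on the plane z = 0
rng = np.random.default_rng(1)
masses = np.array([[0.0, 0.0, -1.0], [0.5, 0.5, -2.0]])
obs = rng.uniform(-1.0, 1.0, size=(50, 3)); obs[:, 2] = 0.0
V = (1.0 / np.linalg.norm(obs[:, None, :] - masses[None, :, :], axis=2)).sum(axis=1)
print(fit_point_mass_intensities(obs, V, masses))   # ≈ [1. 1.]
```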
3

Bayesian Hierarchical Latent Model for Gene Set Analysis

Chao, Yi 13 May 2009 (has links)
A pathway is a predefined set of genes that serve a particular cellular or physiological function. Ranking pathways by their relevance to a particular phenotype can help researchers focus on a few such gene sets. In this thesis, a Bayesian hierarchical latent model is proposed using a generalized linear random-effects model. The advantage of the approach is that it can easily incorporate prior knowledge when the sample size is small and the number of genes is large. For the covariance matrix of the random effects, two Gaussian random processes were considered to model the dependencies among genes in a pathway: one based on a polynomial kernel and the other on a Gaussian kernel. These two kernels were then compared with a constant covariance matrix for the random effect using a ratio based on the joint posterior distribution under each model. For mixture models, log-likelihood values were computed at different values of the mixture proportion and compared across mixtures of the selected kernels and a point-mass density (or constant covariance matrix). The approach was applied to a data set (Mootha et al., 2003) containing expression profiles related to type II diabetes, where the motivation was to identify pathways that can discriminate between normal patients and patients with type II diabetes. / Master of Science
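A minimal sketch of the two kernel covariances mentioned above, built over gene expression profiles; the hyperparameters and toy data are assumptions, and the posterior-ratio comparison from the thesis is not reproduced here.

```python
import numpy as np

def gaussian_kernel(X, length_scale=1.0):
    """Gaussian (RBF) covariance matrix between gene expression profiles (rows of X)."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * length_scale**2))

def polynomial_kernel(X, degree=2, c=1.0):
    """Polynomial covariance matrix between gene expression profiles (rows of X)."""
    return (X @ X.T + c) ** degree

# toy pathway: 6 genes measured on 10 samples
rng = np.random.default_rng(2)
X = rng.normal(size=(6, 10))
print(gaussian_kernel(X).shape, polynomial_kernel(X).shape)   # (6, 6) (6, 6)
```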
4

Commande prédictive d'un robot humanoïde / Model predictive control of a humanoid robot

Herdt, Andrei 27 January 2012 (has links)
The range of motions that humanoid robots can realize is strongly limited by inherent dynamical constraints, so any control law that does not account for these limitations, in one way or another, will fail to avoid falling. Model Predictive Control (MPC) can handle constraints on the state and the control explicitly, which makes it particularly well suited to the control of walking robots. We begin by laying out the specific structure of these constraints, stressing in particular the importance of the supports on the ground. We then give a sufficient condition for avoiding a fall and formulate an MPC law that satisfies it. This formulation then serves as the basis for the design of practical controllers capable of more efficient and more robust control of humanoid walking robots.
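A minimal sketch of one linear MPC step in the spirit described above — a generic double integrator with bounded input, not the walking-specific formulation with support-polygon constraints; the horizon, time step, and input bound are assumed values.

```python
import numpy as np
from scipy.optimize import lsq_linear

def mpc_step(x0, ref, horizon=20, dt=0.1, u_max=1.0):
    """One receding-horizon step of linear MPC for a double integrator.

    Builds the condensed prediction matrices, solves a bounded least-squares
    problem for the whole input sequence, and returns only the first input.
    """
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    C = np.array([[1.0, 0.0]])                 # track the position only

    # condensed prediction: predicted positions = Sx @ x0 + Su @ u
    Sx = np.zeros((horizon, 2))
    Su = np.zeros((horizon, horizon))
    Ak = np.eye(2)
    for i in range(horizon):
        Ak = A @ Ak
        Sx[i] = (C @ Ak).ravel()
        for j in range(i + 1):
            Su[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

    res = lsq_linear(Su, ref - Sx @ x0, bounds=(-u_max, u_max))
    return res.x[0]

x = np.array([0.0, 0.0])
reference = np.full(20, 0.5)                   # step the position to 0.5 m
print(mpc_step(x, reference))
```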
5

Autonomous UAV Path Planning using RSS signals in Search and Rescue Operations

Anhammer, Axel, Lundeberg, Hugo January 2022 (has links)
Unmanned aerial vehicles (UAVs) have emerged as a promising technology in search and rescue (SAR) operations, since they can provide more timely localization and thus shorten the crucial duration of a SAR operation. Previous work has demonstrated proof of concept for localizing missing people using received signal strength (RSS) measurements collected by UAVs. The localization system assumes that the missing person carries a smartphone with Wi-Fi enabled whose signal can be intercepted. This thesis proposes a two-stage path planner for UAVs that uses RSS signals and an initial belief about the missing person's location. The objective of the first stage is to locate an RSS signal. By dividing the search area into grid cells, a hierarchical solution based on several Markov decision processes (MDPs) can be formulated that takes the probabilities of different areas into consideration. The objective of the second stage is to isolate the RSS signal and provide a location estimate. The environment is considered partially observable, and the problem is formulated as a partially observable Markov decision process (POMDP). Two filters, a point mass filter (PMF) and a particle filter (PF), are evaluated with respect to their ability to correctly estimate the state of the environment. The estimated state then acts as input to a deep Q-network (DQN) that selects appropriate actions for the UAV. The DQN thus becomes a path planner for the UAV, and the trajectory it generates is compared with trajectories generated by, among others, a greedy policy. Results for Stage 1 show that the path generated by the MDPs prioritizes areas with higher probability and intuitively appears very reasonable. The results also illustrate potential drawbacks of a hierarchical solution, which could be addressed by incorporating more factors into the problem. Simulation results for Stage 2 show that both the PMF and the PF can successfully estimate the state of the environment and provide an accurate localization estimate, with the PMF producing slightly more accurate estimates than the PF. The DQN succeeds in isolating the missing person's probable location in relatively few actions. However, it performs only marginally better than the greedy policy, suggesting that it may be a complicated solution to a simpler problem.
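A minimal sketch of one particle-filter measurement update driven by an RSS observation — one of the two filters compared above, not the thesis code; the log-distance path-loss parameters (p0, path-loss exponent n, noise sigma) and the toy geometry are assumed values.

```python
import numpy as np

def rss_model(receiver_pos, source_pos, p0=-40.0, n=2.5):
    """Log-distance path-loss model: expected RSS (dBm) at the receiver."""
    d = np.linalg.norm(receiver_pos - source_pos, axis=-1)
    return p0 - 10.0 * n * np.log10(np.maximum(d, 1.0))

def pf_update(particles, weights, uav_pos, rss_meas, rng, sigma=4.0):
    """Reweight particles by the RSS measurement likelihood, then resample."""
    expected = rss_model(uav_pos, particles)
    weights = weights * np.exp(-0.5 * ((rss_meas - expected) / sigma) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# toy example: one measurement taken by a UAV hovering at (50, 50) m
rng = np.random.default_rng(3)
particles = rng.uniform(0.0, 100.0, size=(2000, 2))       # prior belief over a 100 m square
weights = np.full(2000, 1.0 / 2000)
uav, target = np.array([50.0, 50.0]), np.array([30.0, 40.0])
meas = rss_model(uav, target) + rng.normal(0.0, 4.0)       # simulated noisy observation
particles, weights = pf_update(particles, weights, uav, meas, rng)
print(particles.mean(axis=0))   # posterior mean; one reading constrains range, not bearing
```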
6

Three essays on the econometric analysis of high-frequency data

Malec, Peter 27 June 2013 (has links)
In three essays, this thesis deals with the econometric analysis of financial market data sampled at intraday frequencies. Chapter 1 presents a novel approach to modeling serially dependent positive-valued variables that realize a nontrivial proportion of zero outcomes, a typical phenomenon in financial high-frequency time series. We introduce a flexible point-mass mixture distribution, a tailor-made semiparametric specification test, and a new type of multiplicative error model (MEM). Chapter 2 addresses the problem that fixed symmetric kernel density estimators exhibit low precision for positive-valued variables with a large probability mass near zero, which is common in high-frequency data. We show that gamma kernel estimators are superior, while their relative performance depends on the specific density and kernel shape, and we suggest a refined gamma kernel and a data-driven method for choosing the appropriate type of gamma kernel estimator. Chapter 3 turns to the debate about the merits of high-frequency data in large-scale portfolio allocation. We consider the problem of constructing global minimum-variance portfolios based on the constituents of the S&P 500 and show that forecasts based on high-frequency data can yield a significantly lower portfolio volatility than approaches using daily returns, implying noticeable utility gains for a risk-averse investor.
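A minimal sketch of the standard gamma kernel density estimator (Chen, 2000) for nonnegative data with probability mass near zero — not the refined estimator developed in Chapter 2; the bandwidth and the toy sample are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kde(x_grid, data, b=0.1):
    """Gamma kernel density estimate for nonnegative data (Chen, 2000).

    At each evaluation point x the kernel is a gamma density with shape
    x/b + 1 and scale b, so it adapts near the zero boundary instead of
    spilling probability mass onto the negative half-line.
    """
    shapes = x_grid / b + 1.0
    # evaluate the gamma kernel at every data point for every grid point
    dens = gamma.pdf(data[None, :], a=shapes[:, None], scale=b)
    return dens.mean(axis=1)

rng = np.random.default_rng(4)
sample = rng.exponential(scale=0.5, size=500)    # positive data piled up near zero
grid = np.linspace(0.0, 3.0, 7)
print(gamma_kde(grid, sample))
```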
7

Implementation and comparison of the Aircraft Intent Description Language and point-mass Non-Linear Dynamic Inversion approach to aircraft modelling in Modelica

Shreepal, Arcot Manjunath, Vijaya Kumar, Shree Harsha January 2021 (has links)
This study was conducted to determine practical modelling and simulation techniques for performing dynamic stability and performance analysis on a 3-degrees-of-freedom aircraft model using Modelon Impact, a Modelica-based commercial tool. It is based on a conceptual aircraft model for which in-depth details of the aircraft configuration are unknown, and the aim is to determine a suitable model that can capture the longitudinal dynamics and aerodynamic constraints of the aircraft during the conceptual design phase. Requirements include short execution time, easy model development, and minimal data requirements. This thesis therefore develops plant and control architectures in Modelon Impact that can be used for the rapid development of aircraft concepts with adequate fidelity in a longitudinal mission-based tracking environment. To identify a methodology that mitigates the limitations of a traditional feedback controller in a conceptual aircraft design environment, two methodologies are compared: Sequential DAE resolution (SDR) and Dynamic Inversion (DI) control, both discussed in the context of an object-oriented aircraft model. The advantages and shortcomings of each model are compared through several experiments of increasing longitudinal mission complexity, and the more appropriate of the two for the conceptual stage of aircraft design is identified. The two methodologies are also compared with respect to their level of complexity, code structure, readability, and ease of use.
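A minimal sketch of generic 3-DOF longitudinal point-mass equations of motion of the kind such models build on — written in Python rather than Modelica, and not the AIDL, SDR, or dynamic-inversion implementation from the thesis; the mass, forces, and forward-Euler integration are illustrative assumptions.

```python
import numpy as np

def point_mass_longitudinal(state, thrust, lift, drag, m=5000.0, g=9.81):
    """Time derivatives of a 3-DOF longitudinal point-mass aircraft model.

    state = [V, gamma, h, x]: airspeed, flight-path angle, altitude, range.
    Thrust is assumed aligned with the velocity vector for simplicity.
    """
    V, gamma, h, x = state
    V_dot = (thrust - drag) / m - g * np.sin(gamma)
    gamma_dot = lift / (m * V) - g * np.cos(gamma) / V
    return np.array([V_dot, gamma_dot, V * np.sin(gamma), V * np.cos(gamma)])

# toy simulation: near-level flight, forward-Euler integration over 10 s
state = np.array([100.0, 0.0, 1000.0, 0.0])
dt = 0.1
for _ in range(100):
    state = state + dt * point_mass_longitudinal(state, thrust=3.0e4, lift=4.9e4, drag=2.9e4)
print(state)   # airspeed, flight-path angle, altitude, range
```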
