211

Data-driven approach for control performance monitoring and fault diagnosis

Yu, Jie, 1977- 23 August 2011
Not available / text
212

Evaluating SLAM algorithms for Autonomous Helicopters

Skoglund, Martin January 2008
Navigation with unmanned aerial vehicles (UAVs) requires good knowledge of the current position and other states. A UAV navigation system often uses GPS and inertial sensors in a state estimation solution. If the GPS signal is lost or corrupted, state estimation must still be possible, and this is where simultaneous localization and mapping (SLAM) provides a solution. SLAM considers the problem of incrementally building a consistent map of a previously unknown environment while simultaneously localizing the vehicle within this map; a solution therefore does not require position data from the GPS receiver. This thesis presents a visual feature based SLAM solution using a low-resolution video camera, a low-cost inertial measurement unit (IMU) and a barometric pressure sensor. State estimation is performed with an extended information filter (EIF), where sparseness in the information matrix is enforced with an approximation. An implementation is evaluated on real flight data and compared to an EKF-SLAM solution. Results show that both solutions provide similar estimates but that the EIF is over-confident. The sparse structure is exploited, though possibly not fully, making the solution nearly linear in time; storage requirements are linear in the number of features, which enables evaluation over longer periods.
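The information-form update central to such a filter is compact enough to sketch. Below is a minimal illustration of an EIF measurement update, together with a crude sparsification step that zeroes weak off-diagonal entries of the information matrix; the function names, interfaces, and thresholding rule are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def eif_measurement_update(Y, y, H, R, z, h_x):
    """Extended information filter measurement update in information form.

    Y : information matrix (inverse covariance)
    y : information vector (Y @ state_estimate)
    H : measurement Jacobian at the linearization point
    R : measurement noise covariance
    z : observed measurement; h_x : predicted measurement h(x)
    """
    Rinv = np.linalg.inv(R)
    x_lin = np.linalg.solve(Y, y)           # recover the linearization point
    innovation = z - h_x + H @ x_lin
    Y_new = Y + H.T @ Rinv @ H              # information matrix update
    y_new = y + H.T @ Rinv @ innovation     # information vector update
    return Y_new, y_new

def sparsify(Y, threshold=1e-6):
    """Crude stand-in for the thesis's approximation: drop weak links
    between states/features so the information matrix stays sparse."""
    Y_sparse = Y.copy()
    mask = np.abs(Y_sparse) < threshold
    np.fill_diagonal(mask, False)           # never touch the diagonal
    Y_sparse[mask] = 0.0
    return Y_sparse
```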
213

Classification models for high-dimensional data with sparsity patterns

Tillander, Annika January 2013
Today's high-throughput data collection devices, e.g. spectrometers and gene chips, create information in abundance. However, this poses serious statistical challenges, as the number of features is usually much larger than the number of observed units. Further, in this high-dimensional setting, only a small fraction of the features are likely to be informative for any specific project. In this thesis, three different approaches to two-class supervised classification in this high-dimensional, low-sample-size setting are considered. There are classifiers that are known to mitigate the issues of high dimensionality, e.g. distance-based classifiers such as Naive Bayes. However, these classifiers are often computationally intensive, although less so for discrete data. Hence, continuous features are often transformed into discrete features. In the first paper, a discretization algorithm suitable for high-dimensional data is suggested and compared with other discretization approaches. Further, the effect of discretization on the misclassification probability in the high-dimensional setting is evaluated. Linear classifiers are more stable, which motivates adjusting the linear discriminant procedure to the high-dimensional setting. In the second paper, a two-stage estimation procedure for the inverse covariance matrix, applying Lasso-based regularization and Cuthill-McKee ordering, is suggested. The estimation gives a block-diagonal approximation of the covariance matrix, which in turn leads to an additive classifier. In the third paper, an asymptotic framework that represents sparse and weak block models is derived and a technique for block-wise feature selection is proposed. Probabilistic classifiers have the advantage of providing the probability of membership in each class for new observations rather than simply assigning them to a class. In the fourth paper, a method is developed for constructing a Bayesian predictive classifier. Given the block-diagonal covariance matrix, the resulting Bayesian predictive and marginal classifier provides an efficient solution to the high-dimensional problem by splitting it into smaller tractable problems. The relevance and benefits of the proposed methods are illustrated using both simulated and real data. / With today's technology, for example spectrometers and gene chips, data are generated in large quantities. This abundance of data is not only an advantage but also causes certain problems: usually the number of variables (p) is considerably larger than the number of observations (n). This yields so-called high-dimensional data, which requires new statistical methods, as the traditional methods were developed for the opposite situation (p < n). Moreover, usually very few of all these variables are relevant for any given project, and the strength of the information in the relevant variables is often weak. This type of data is therefore often described as sparse and weak, and identifying the relevant variables is commonly likened to finding a needle in a haystack. This thesis takes up three different ways of classifying this type of high-dimensional data, where classifying means that, given a data set with both explanatory variables and an outcome variable, a function or algorithm is taught to predict the outcome variable based only on the explanatory variables. The real data used in the thesis are microarrays: cell samples that show the activity of the genes in the cell.
The goal of the classification is to use the variation in activity among the thousands of genes (the explanatory variables) to determine whether the cell sample comes from cancer tissue or normal tissue (the outcome variable). There are classification methods that can handle high-dimensional data, but these are often computationally intensive and therefore often work better for discrete data. By transforming continuous variables into discrete ones (discretization), the computation time can be reduced and the classification made more efficient. The thesis studies how discretization affects the prediction accuracy of the classification, and a very efficient discretization method for high-dimensional data is proposed. Linear classification methods have the advantage of being stable. Their drawback is that they require an invertible covariance matrix, which the covariance matrix of high-dimensional data is not. The thesis proposes a way to estimate the inverse of sparse covariance matrices by a block-diagonal matrix. This matrix also has the advantage of leading to additive classification, which makes it possible to select whole blocks of relevant variables. The thesis also presents a method for identifying and selecting these blocks. There are also probabilistic classification methods, which have the advantage of giving, for an observation, the probability of belonging to each of the possible outcomes, unlike most other classification methods, which only predict the outcome. The thesis proposes such a Bayesian method, given the block-diagonal matrix and normally distributed outcome classes. The relevance and advantages of the proposed methods are demonstrated by applying them to simulated and real high-dimensional data.
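To illustrate why a block-diagonal precision (inverse covariance) matrix leads to an additive classifier, here is a minimal sketch of a linear discriminant whose score decomposes into independent per-block terms. The block partition, the estimates, and the function signature are illustrative assumptions; this is not the two-stage Lasso/Cuthill-McKee procedure of the second paper.

```python
import numpy as np

def block_lda_score(x, mu0, mu1, blocks, precisions, prior0=0.5):
    """Linear discriminant score under a block-diagonal precision matrix.

    blocks     : list of index arrays partitioning the features
    precisions : per-block inverse covariance estimates
    Because the precision matrix is block diagonal, the score is a sum
    over blocks, so each block can be evaluated (and selected)
    independently of the others.
    """
    score = np.log(prior0 / (1.0 - prior0))
    for idx, P in zip(blocks, precisions):
        d = mu0[idx] - mu1[idx]                # class mean difference
        m = 0.5 * (mu0[idx] + mu1[idx])        # midpoint between classes
        score += d @ P @ (x[idx] - m)          # per-block contribution
    return score                               # score > 0 -> assign class 0
```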
214

Reverse Engineering of Temporal Gene Expression Data Using Dynamic Bayesian Networks And Evolutionary Search

Salehi, Maryam 17 September 2008
Capturing the mechanism of gene regulation in a living cell is essential for predicting the behavior of the cell in response to intercellular or extracellular factors. Such prediction capability can potentially lead to the development of improved diagnostic tests and therapeutics [21]. Amongst the reverse engineering approaches that aim to model gene regulation are Dynamic Bayesian Networks (DBNs). DBNs are of particular interest as these models are capable of discovering the causal relationships between genes while dealing with noisy gene expression data. At the same time, the problem of discovering the optimal DBN model makes structure learning of DBNs a challenging topic. This is mainly due to the high dimensionality of the search space of gene expression data, which makes exhaustive search strategies for identifying the best DBN structure impractical. In this work, the application of a covariance-based evolutionary search algorithm to structure learning of DBNs is proposed for the first time. In addition, the convergence time of the proposed algorithm is improved compared to previously reported covariance-based evolutionary search approaches. This is achieved by keeping a fixed number of good sample solutions from previous iterations. Finally, the proposed approach, M-CMA-ES, unlike gradient-based methods, has a high probability of converging to a global optimum. To assess how efficiently this approach works, a temporal synthetic dataset is developed. The proposed approach is then applied to this dataset as well as the Brainsim dataset, a well-known simulated temporal gene expression dataset [58]. The results indicate that the proposed method is quite efficient in reconstructing the networks in both the synthetic and Brainsim datasets. Furthermore, it outperforms other algorithms in terms of both the predicted structure accuracy and the mean square error of the reconstructed time series of gene expression data. For validation purposes, the proposed approach is also applied to a biological dataset composed of 14 cell-cycle regulated genes in the yeast Saccharomyces cerevisiae. Considering the KEGG pathway as the target network, the efficiency of the proposed reverse engineering approach significantly improves on the results of two previous studies of yeast cell cycle data in terms of capturing the correct interactions. / Thesis (Master, Computing) -- Queen's University, 2008-09-09 11:35:33.312
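As a rough illustration of covariance-based evolutionary search applied to structure learning, the sketch below uses the ask/tell interface of the `cma` Python package to optimize a continuous vector that is thresholded into a candidate adjacency matrix. The thresholding scheme and the placeholder scoring function are assumptions; the thesis's M-CMA-ES variant, which retains elite samples across iterations, is not reproduced here.

```python
import numpy as np
import cma  # pip install cma

N_GENES = 5  # number of genes / network nodes (illustrative)

def vector_to_adjacency(v, threshold=0.5):
    """Threshold a continuous search vector into a binary adjacency matrix."""
    return (np.asarray(v).reshape(N_GENES, N_GENES) > threshold).astype(int)

def structure_loss(adj, data):
    """Placeholder loss; a real implementation would evaluate a DBN
    scoring metric such as BIC or BDe on the expression data."""
    return float(abs(adj.sum() - N_GENES))  # stand-in only

data = np.random.rand(20, N_GENES)  # fake temporal expression data
es = cma.CMAEvolutionStrategy(N_GENES * N_GENES * [0.5], 0.2, {"maxiter": 50})
while not es.stop():
    candidates = es.ask()                    # sample from N(mean, sigma^2 * C)
    losses = [structure_loss(vector_to_adjacency(v), data) for v in candidates]
    es.tell(candidates, losses)              # adapt mean and covariance
best_adjacency = vector_to_adjacency(es.result.xbest)
```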
215

Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

Liang, Yuli January 2015
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency in certain multivariate two-level data where compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both the circularity and the exchangeability present in the data. In particular, estimation in balanced random effects models with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. The existence of explicit maximum likelihood estimators is discussed, and sufficient conditions for obtaining explicit and unique estimators of the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. The external tests concern various hypotheses about block covariance structures, as well as mean structures, while the internal tests concern specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
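The spectral convenience behind such explicit estimators comes from circular symmetry: a circulant covariance matrix is diagonalized by the discrete Fourier transform, so its eigenvalues are simply the DFT of its first row. A minimal one-level sketch follows; the thesis treats block circular Toeplitz structures, while this shows only the plain circular case with an illustrative covariance vector.

```python
import numpy as np
from scipy.linalg import circulant

# A circular Toeplitz covariance for 6 equally spaced positions on a
# circle: dependence decays with circular distance (illustrative values).
first_row = np.array([1.0, 0.5, 0.2, 0.1, 0.2, 0.5])
Sigma = circulant(first_row)  # symmetric since first_row is palindromic

# Eigenvalues of a circulant matrix are the DFT of its first row;
# the DFT matrix supplies the eigenvectors.
eigvals_fft = np.fft.fft(first_row).real
eigvals_direct = np.linalg.eigvalsh(Sigma)
assert np.allclose(np.sort(eigvals_fft), np.sort(eigvals_direct))
```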
216

Multivariate Spatial Process Gradients with Environmental Applications

Terres, Maria Antonia January 2014
Previous papers have elaborated formal gradient analysis for spatial processes, focusing on the distribution theory for directional derivatives associated with a response variable assumed to follow a Gaussian process model. In the current work, these ideas are extended to additionally accommodate one or more continuous covariates whose directional derivatives are of interest, and to relate the behavior of the directional derivatives of the response surface to those of the covariate surfaces. It is of interest to assess whether, in some sense, the gradients of the response follow those of the explanatory variables, thereby gaining insight into the local relationships between the variables. The joint Gaussian structure of the spatial random effects and associated directional derivatives allows for explicit distribution theory and, hence, kriging across the spatial region using multivariate normal theory. The gradient analysis is illustrated for bivariate and multivariate spatial models and for non-Gaussian responses such as presence-absence and point patterns, and is outlined for several additional spatial modeling frameworks that commonly arise in the literature. Working within a hierarchical modeling framework, posterior samples enable all gradient analyses to occur as post-model-fitting procedures. / Dissertation
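For intuition, when the response follows a Gaussian process with a squared exponential kernel, the gradient of the posterior mean is available in closed form, since differentiation is a linear operation on the process. The sketch below computes that mean gradient; the kernel, hyperparameters, and data are illustrative assumptions and do not reproduce the dissertation's full multivariate treatment.

```python
import numpy as np

def rbf(x1, x2, sigma2=1.0, ell=1.0):
    """Squared exponential kernel between two sets of points."""
    d = x1[:, None, :] - x2[None, :, :]
    return sigma2 * np.exp(-0.5 * np.sum(d**2, axis=-1) / ell**2)

def posterior_mean_gradient(X, y, x_star, sigma2=1.0, ell=1.0, noise=1e-4):
    """Gradient of the GP posterior mean at x_star, using
    d/dx* k(x*, xi) = -k(x*, xi) * (x* - xi) / ell^2."""
    K = rbf(X, X, sigma2, ell) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                      # K^{-1} y
    k_star = rbf(x_star[None, :], X, sigma2, ell)[0]   # k(x*, xi), shape (n,)
    grad_k = -(x_star[None, :] - X) * k_star[:, None] / ell**2
    return grad_k.T @ alpha                            # shape (d,)

X = np.random.rand(30, 2)                  # illustrative spatial locations
y = np.sin(3 * X[:, 0]) + X[:, 1]          # illustrative response surface
g = posterior_mean_gradient(X, y, np.array([0.5, 0.5]))
# directional derivative along a unit vector u is simply u @ g
```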
217

Analysis of individual data collected by group: a comparison of multivariate regression analysis and MCA (multilevel covariance structure analysis)

OZEKI, Miki (尾関, 美喜) 20 April 2006
This entry uses content digitized by the National Institute of Informatics (国立情報学研究所).
218

Contributions to statistical learning and statistical quantification in nanomaterials

Deng, Xinwei 22 June 2009
This research focuses on developing new techniques in statistical learning, spanning methodology, computation and application, together with statistical quantification in nanomaterials. For a large number of random variables with temporal or spatial structure, we proposed shrinkage estimates of the covariance matrix that account for their Markov structure. The proposed method exploits the sparsity in the inverse covariance matrix in a systematic fashion. To deal with high-dimensional data, we proposed a robust kernel principal component analysis for dimension reduction, which can extract the nonlinear structure of high-dimensional data more robustly. To build prediction models more efficiently, we developed an active learning approach via sequential design that actively selects data points into the training set. By combining stochastic approximation and D-optimal designs, the proposed method can build a model with minimal time and effort. We also proposed factor logit models with a large number of categories for classification. We show that the convergence rate of the classifier functions estimated from the proposed factor model does not rely on the number of categories, but only on the number of factors. It can therefore achieve better classification accuracy. For statistical nano-quantification, an approach is presented to quantify the elastic deformation of nanomaterials. We proposed a new statistical modeling technique, called sequential profile adjustment by regression (SPAR), to account for and eliminate various experimental errors and artifacts. SPAR can automatically detect and remove systematic errors and therefore gives a more precise estimate of the elastic modulus.
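The exact shrinkage construction of the thesis is not reproduced here, but the general idea of exploiting sparsity in the inverse covariance matrix can be illustrated with off-the-shelf estimators. The sketch below fits scikit-learn's graphical lasso to synthetic AR(1) data, whose true precision matrix is tridiagonal (a simple temporal Markov structure); the penalty value and data are placeholders.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso, LedoitWolf

rng = np.random.default_rng(0)

# AR(1)-style data: a first-order temporal Markov structure, so the
# true inverse covariance (precision) matrix is tridiagonal.
n, p, rho = 200, 10, 0.6
X = np.zeros((n, p))
X[:, 0] = rng.standard_normal(n)
for j in range(1, p):
    X[:, j] = rho * X[:, j - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(n)

gl = GraphicalLasso(alpha=0.1).fit(X)   # L1-penalized precision estimate
lw = LedoitWolf().fit(X)                # shrinkage covariance baseline
print(np.round(gl.precision_, 2))       # mass concentrates near the diagonal
```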
219

Receding Horizon Covariance Control

Wendel, Eric August 2012
Covariance assignment theory, introduced in the late 1980s, provided the only means to directly control the steady-state error properties of a linear system subject to Gaussian white noise and parameter uncertainty. This theory, however, does not extend to control of the transient uncertainties, and to date there exist no practical engineering solutions to the problem of directly and optimally controlling the uncertainty in a linear system from one Gaussian distribution to another. In this thesis I design a dual-mode Receding Horizon Controller (RHC) that takes a controllable, deterministic linear system from an arbitrary initial covariance to near a desired stationary covariance in finite time. The RHC solves a sequence of free-time Optimal Control Problems (OCPs) that directly control the fundamental solution matrices of the linear system; each problem is a right-invariant OCP on the matrix Lie group GLn of invertible matrices. A terminal constraint ensures that each OCP takes the system to the desired covariance. I show that, by reducing the Hamiltonian system of each OCP from T*GLn to gln* x GLn, the transversality condition corresponding to the terminal constraint simplifies the two-point Boundary Value Problem (BVP) to a single unknown in the initial or final value of the costate in gln*. These results are applied in the design of a dual-mode RHC. The first mode repeatedly solves the OCPs until the optimal time for the system to reach the desired covariance is less than the RHC update time. This triggers the second mode, which applies covariance assignment theory to stabilize the system near the desired covariance. The dual-mode controller is illustrated on a planar system. The BVPs are solved using an indirect shooting method that numerically integrates the fundamental solutions on R^4 using an adaptive Runge-Kutta method. I contend that extension of the results of this thesis to higher-dimensional systems using either indirect or direct methods will require numerical integrators that account for the Lie group structure. I conclude with some remarks on the possible extension of a classic result called Lie's method of reduction to receding horizon control.
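For context, the stationary covariance that such a controller targets is characterized by a continuous-time Lyapunov equation. The sketch below verifies a stationary covariance for an illustrative planar system; the matrices are assumptions for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Closed-loop planar system dx = A x dt + B dW (illustrative matrices).
A = np.array([[-1.0,  0.5],
              [-0.5, -2.0]])   # Hurwitz, so a stationary covariance exists
B = np.array([[0.3, 0.0],
              [0.0, 0.4]])

# The stationary covariance X solves A X + X A^T + B B^T = 0.
X_ss = solve_continuous_lyapunov(A, -B @ B.T)
print(X_ss)
# A covariance controller would choose feedback gains so that the
# closed-loop A yields a prescribed X_ss.
```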
220

Modification of the least-squares collocation method for non-stationary gravity field modelling

Darbeheshti, Neda January 2009
Geodesy deals with the accurate analysis of spatial and temporal variations in the geometry and physics of the Earth at local and global scales. In geodesy, least-squares collocation (LSC) is a bridge between the physical and statistical understanding of different functionals of the gravitational field of the Earth. This thesis specifically focuses on the implicit, and often incorrect, LSC assumptions of isotropy and homogeneity, which limit the application of LSC in non-stationary gravity field modelling. In particular, the work seeks to derive expressions for local and global analytical covariance functions that account for the anisotropy and heterogeneity of the Earth's gravity field. / Standard LSC assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of the gravity field in, e.g., mountainous areas and under-smoothing in great plains. The kernel convolution method from spatial statistics is introduced for non-stationary covariance structures, and its advantage in dealing with non-stationarity in geodetic data is demonstrated. / Tests of the new non-stationary solutions were performed over the Darling Fault, Western Australia, where the anomalous gravity field is anisotropic and non-stationary. Stationary and non-stationary covariance functions are compared in 2D LSC on the empirical example of gravity anomaly interpolation. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation. Non-stationarity of both the mean and the covariance is considered in planar geoid determination by LSC, to test how each affects the LSC result when compared with GPS-levelling points in the area. Non-stationarity of the mean was not very considerable in this case, but non-stationary covariances were very effective when optimising the gravimetric quasigeoid to agree with the geometric quasigeoid. / In addition, the importance of the choice of the parameters of the non-stationary covariance functions within a Bayesian framework is pointed out, as is the improvement offered by the new method for different functionals on the globe.
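A minimal sketch of the kernel convolution idea: smoothing white noise with Gaussian kernels whose bandwidth varies over space yields a closed-form non-stationary covariance. The length-scale function and parameters below are illustrative assumptions, not the thesis's parameterization.

```python
import numpy as np

def ell(x):
    """Spatially varying length scale: smoother on the right, rougher on
    the left (an illustrative choice, e.g. plains vs. mountains)."""
    return 0.2 + 0.8 / (1.0 + np.exp(-x))

def nonstationary_cov(x1, x2, sigma2=1.0):
    """Normalized convolution of two Gaussian kernels with bandwidths
    ell(x1) and ell(x2); positive definite by construction (1D case)."""
    l1, l2 = ell(x1), ell(x2)
    s = l1**2 + l2**2
    prefac = np.sqrt(2.0 * l1 * l2 / s)
    return sigma2 * prefac * np.exp(-((x1 - x2) ** 2) / (2.0 * s))

x = np.linspace(-3.0, 3.0, 7)
C = nonstationary_cov(x[:, None], x[None, :])   # 7 x 7 covariance matrix
assert np.all(np.linalg.eigvalsh(C) > -1e-10)   # valid covariance matrix
```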
