81

Absolute depth using low-cost light field cameras

Rangappa, Shreedhar January 2018 (has links)
Digital cameras are increasingly used for measurement tasks within engineering scenarios, often as part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require Z-dimensional data (depth data) along with the X and Y dimensional data. New camera system designs have previously been developed by integrating multiple cameras to provide 3D data, ranging from two-camera photogrammetry to multiple-camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been made, and many research groups around the world are currently working on camera technology from different perspectives: computer vision, algorithm development, metrology, etc. Plenoptic or lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras place an additional micro lens array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but these efforts have mostly been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world-scene metrology applications, specifically to record absolute coordinate data. Specific attention has been paid to a range of low-cost lightfield cameras in order to: understand the functional/behavioural characteristics of the optics; identify potential needs for optical and/or algorithm development; define sensitivity, repeatability, and accuracy characteristics and limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel outputs of this work are: an analysis of lightfield camera system sensitivity leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); the development of bespoke calibration algorithms that remove radial/tangential distortion from data captured using any MLA-based camera; and a lightfield-camera-independent algorithm that delivers 3D coordinate data in absolute units within a well-defined measurable range for a given camera.
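The radial/tangential distortion mentioned in the calibration work is conventionally described by the Brown-Conrady lens model; the sketch below shows that standard model in Python, offered as background rather than as the thesis's bespoke MLA calibration, whose details the abstract does not give:

```python
import numpy as np

def distort_points(pts, k1, k2, p1, p2):
    """Brown-Conrady lens model: map undistorted, normalized image
    points to their distorted positions. Calibration estimates
    (k1, k2, p1, p2) by minimizing reprojection error over a known
    target; undistortion then inverts this map, e.g. iteratively."""
    x, y = pts[:, 0], pts[:, 1]
    r2 = x**2 + y**2                     # squared radius from optical axis
    radial = 1 + k1 * r2 + k2 * r2**2    # radial distortion factor
    x_t = 2 * p1 * x * y + p2 * (r2 + 2 * x**2)  # tangential terms
    y_t = p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.column_stack((x * radial + x_t, y * radial + y_t))

pts = np.array([[0.1, 0.2], [-0.3, 0.05]])       # normalized coordinates
print(distort_points(pts, k1=-0.2, k2=0.05, p1=1e-3, p2=-5e-4))
```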
82

Regularized methods for high-dimensional and bi-level variable selection

Breheny, Patrick John 01 July 2009 (has links)
Many traditional approaches cease to be useful when the number of variables is large in comparison with the sample size. Penalized regression methods have proved to be an attractive approach, both theoretically and empirically, for dealing with these problems. This thesis focuses on the development of penalized regression methods for high-dimensional variable selection. The first part of this thesis deals with problems in which the covariates possess a grouping structure that can be incorporated into the analysis to select important groups as well as important members of those groups. I introduce a framework for grouped penalization that encompasses the previously proposed group lasso and group bridge methods, sheds light on the behavior of grouped penalties, and motivates the proposal of a new method, group MCP. The second part of this thesis develops fast algorithms for fitting models with complicated penalty functions such as grouped penalization methods. These algorithms combine the idea of local approximation of penalty functions with recent research into coordinate descent algorithms to produce highly efficient numerical methods for fitting models with complicated penalties. Importantly, I show these algorithms to be both stable and linear in the dimension of the feature space, allowing them to be efficiently scaled up to very large problems. In the third part of this thesis, I extend the idea of false discovery rates to penalized regression. The Karush-Kuhn-Tucker conditions describing penalized regression estimates provide testable hypotheses involving partial residuals. I use these hypotheses to connect the previously disparate fields of multiple comparisons and penalized regression, develop estimators for the false discovery rates of methods such as the lasso and elastic net, and establish theoretical results. Finally, the methods from all three sections are studied in a number of simulations and applied to real data from gene expression and genetic association studies.
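The coordinate descent idea referred to above can be illustrated on the plain lasso; the following is a minimal sketch under simplifying assumptions (standardized covariates, cyclic updates), not the grouped or MCP-penalized variants the thesis develops:

```python
import numpy as np

def soft_threshold(z, lam):
    """Univariate lasso solution: S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1,
    assuming the columns of X are standardized (mean 0, variance 1)."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                       # running residual
    for _ in range(n_iter):
        for j in range(p):
            z = b[j] + X[:, j] @ r / n  # partial-residual correlation
            b_new = soft_threshold(z, lam)
            r += X[:, j] * (b[j] - b_new)  # O(n) residual update
            b[j] = b_new
    return b
```

Each coordinate update is a closed-form univariate problem, which is what makes the cost per sweep linear in the dimension of the feature space.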
83

Vessiot: A Maple Package for Variational and Tensor Calculus in Multiple Coordinate Frames

Miller, Charles E. 01 May 1999 (has links)
The Maple V package Vessiot is an extensive set of procedures for performing computations in variational and tensor calculus. Vessiot is an extension of a previous package, Helmholtz, which was written by Cinnamon Hillyard for performing operations in the calculus of variations. The original set of commands included standard operators on differential forms, Euler-Lagrange operators, the Lie bracket operator, Lie derivatives, and homotopy operators. These capabilities are preserved in Vessiot, and enhanced so as to function in a multiple coordinate frame context. In addition, a substantial number of general tensor operations have been added to the package. These include standard algebraic operations such as the tensor product, contraction, and raising and lowering of indices, as well as covariant and Lie differentiation. Objects such as connections, the Riemannian curvature tensor, and the Ricci tensor and scalar may also be easily computed. A synopsis of the command syntax appears in Appendix A on pages 194 through 225, and a complete listing of the Maple procedural code is given in Appendix B, beginning on page 222.
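As a rough Python analogue of the Euler-Lagrange operator that Helmholtz and Vessiot provide (SymPy's generic routine, not the Vessiot command syntax):

```python
from sympy import Function, Rational, diff, symbols
from sympy.calculus.euler import euler_equations

t = symbols("t")
x = Function("x")

# Lagrangian of the harmonic oscillator: L = (1/2) x'^2 - (1/2) x^2
L = Rational(1, 2) * diff(x(t), t) ** 2 - Rational(1, 2) * x(t) ** 2

# Euler-Lagrange operator recovers the equation of motion x'' + x = 0
print(euler_equations(L, x(t), t))
```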
84

Generalized one-dimensional Hubbard models

Fomin, V. 20 September 2010 (has links) (PDF)
This thesis is devoted to the study of the one-dimensional Hubbard model and its generalizations. The Hubbard model is a fundamental model of condensed matter physics, describing interacting electrons on a lattice, and it has a very rich physical structure. Despite the simplicity of its construction, the model has been applied to problems as varied as high-temperature superconductivity, magnetism, and the metal-insulator transition. In one dimension, the Hubbard model is a much-studied integrable model that has served as a 'laboratory' for condensed matter physics. Recently, quantum integrable systems in general, and the Hubbard model in particular, have appeared in a surprising way in the context of the AdS/CFT correspondence. The point of contact between these fields is the Bethe equations: those of new integrable models, and of generalizations of existing models, are a priori relevant to applications of the AdS/CFT duality. In the first part of the thesis, the basic notions of quantum integrability are presented: the R-matrix formalism, the Yang-Baxter equation, and integrable spin chains. In the second part, some fundamental results concerning the Hubbard model are reviewed: the coordinate Bethe Ansatz solution, the real solutions of the Lieb-Wu equations, etc. The application to the AdS/CFT correspondence is also considered; however, it turns out that certain modifications of the Hubbard model are necessary to reproduce the results of this correspondence. This is one of the main motivations for studying generalized Hubbard models. The fourth part is devoted to generalizations of the Hubbard model, concentrating on the supersymmetric cases. Chapter five presents the results obtained in this thesis on generalized Hubbard models, in particular the coordinate Bethe Ansatz and the real solutions of the Bethe equations obtained in the thermodynamic limit. The Bethe equations obtained differ from those of Lieb and Wu by phases whose appearance is an encouraging sign for applications in the AdS/CFT context. Possible applications, notably in condensed matter physics, are also considered.
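For reference, the Lieb-Wu equations mentioned above take the following standard form for N electrons, M of them with down spin, on an L-site chain with coupling U (sign and normalization conventions vary between references):

```latex
e^{\mathrm{i} k_j L}
  = \prod_{l=1}^{M}
    \frac{\lambda_l - \sin k_j - \mathrm{i}U/4}
         {\lambda_l - \sin k_j + \mathrm{i}U/4},
  \qquad j = 1, \dots, N,
\qquad
\prod_{j=1}^{N}
    \frac{\lambda_l - \sin k_j - \mathrm{i}U/4}
         {\lambda_l - \sin k_j + \mathrm{i}U/4}
  = \prod_{\substack{m = 1 \\ m \neq l}}^{M}
    \frac{\lambda_l - \lambda_m - \mathrm{i}U/2}
         {\lambda_l - \lambda_m + \mathrm{i}U/2},
  \qquad l = 1, \dots, M.
```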
85

Use of Simulation Optimization for Clearance of Flight Control Laws

Fredman, Kristin, Freiholtz, Anna January 2006 (has links)
Before a new flight control system is released for flight, a huge number of simulations are evaluated to find weaknesses in the system. This process is called flight clearance. Flight clearance is a very important but time-consuming process. There is a need for better flight clearance methods, and one of the most promising is the use of optimization. In this thesis the flight clearance of a simulation model of JAS 39 Gripen is examined. Two flight clearance algorithms using two different optimization methods are evaluated and compared to each other and to a traditional flight clearance method.

The flight clearance process is separated into three cases: search for the worst flight condition, search for the worst manoeuvre, and search for the worst flight condition including parameter uncertainties. For all cases the optimization algorithms find a more dangerous case than the traditional method. In the search for the worst flight condition, both with and without uncertainties, the optimization algorithms are preferable to the traditional method with respect to both the clearance results and the number of objective function calls. The search for the worst manoeuvre is a much more complex problem: even though the algorithms find more dangerous manoeuvres than the traditional method, it is not certain that they find the worst ones. Either other methods should be used or the problem has to be rephrased; for example, other optimization variables or a few linearizations of the optimization problem could reduce its complexity.

The overall impression is that the required information and the problem characteristics determine which method is most suitable. The information required must be weighed against the cost of objective function calls. Compared to the traditional method, the optimization methods used in this thesis give extended information about the problems examined and are better at locating the worst case.
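To give a flavour of such a worst-case search, here is a minimal sketch that drives a global optimizer over a flight envelope; the clearance criterion, envelope bounds, and surrogate objective are all placeholders, not the actual JAS 39 Gripen model or the thesis's algorithms:

```python
import numpy as np
from scipy.optimize import differential_evolution

def clearance_margin(x):
    """Placeholder objective: a stability margin at flight condition
    x = (Mach, altitude_m). A real study would run the closed-loop
    simulation model here; we use a made-up smooth surrogate."""
    mach, alt = x
    return (mach - 0.9) ** 2 + 1e-9 * (alt - 5000.0) ** 2

# Search for the WORST (minimum-margin) condition in the envelope.
bounds = [(0.3, 1.2), (100.0, 15000.0)]  # hypothetical Mach/altitude limits
result = differential_evolution(clearance_margin, bounds, seed=0)
print("worst condition:", result.x, "margin:", result.fun)
```

The trade-off the abstract describes shows up directly here: a global optimizer spends objective function calls (simulation runs) in exchange for a sharper estimate of the worst case than gridding the envelope.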
86

Organizing a Global Coordinate System from Local Information on an Amorphous Computer

Nagpal, Radhika 29 August 1999 (has links)
This paper demonstrates that it is possible to generate a reasonably accurate coordinate system on randomly distributed processors, using only local information and local communication. By a coordinate system we mean that each element assigns itself a logical coordinate that maps to its global physical location, starting with no a priori knowledge of position or orientation. The algorithm presented is inspired by biological systems that use chemical gradients to determine the position of cells. Extensive analysis and simulation results are presented. Two key results are: there is a critical minimum average neighborhood size of 15 for good accuracy, and there is a fundamental limit on the resolution of any coordinate system determined strictly from local communication. We also demonstrate that, using this algorithm, random distributions of processors produce significantly better accuracy than regular processor grids, such as those used by cellular automata. This has implications for discrete models of biology as well as for building smart sensor arrays.
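The chemical-gradient idea amounts to multilateration: seed processors flood hop counts outward, each node converts hop counts into distance estimates, and the node then solves a small least-squares problem for its logical coordinate. A minimal sketch of that final step, with hypothetical seed positions and hop-derived distances (not the paper's exact formulation):

```python
import numpy as np
from scipy.optimize import least_squares

# Known seed (anchor) positions and hop-count-derived distance estimates.
seeds = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dists = np.array([7.1, 7.2, 5.0])  # hops * expected distance per hop

def residuals(p):
    """Mismatch between a candidate position p and each distance estimate."""
    return np.linalg.norm(seeds - p, axis=1) - dists

# Least-squares coordinate estimate from purely local information.
sol = least_squares(residuals, x0=np.array([5.0, 5.0]))
print("estimated logical coordinate:", sol.x)
```

The quantization of hop counts is one source of the resolution limit the paper identifies: distances are only known to within roughly one communication radius.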
87

Design and analysis of a three degrees of freedom (DOF) parallel manipulator with decoupled motions

Qian, Jijie 01 April 2009 (has links)
Parallel manipulators have been the subject of much robotics research during the past three decades. A parallel manipulator typically consists of a moving platform that is connected to a fixed base by at least two kinematic chains in parallel. Parallel manipulators can provide several attractive advantages over their serial counterparts in terms of high stiffness, high accuracy, and low inertia, which make them viable alternatives for a wide range of applications. But parallel manipulators also have some disadvantages, such as complex forward kinematics, a small workspace, complicated structures, and high cost. To overcome these shortcomings, the development of parallel manipulators with fewer than six degrees of freedom has accelerated. However, most of the parallel manipulators presented so far have coupled motion between the position and orientation of the end-effector; the kinematic model is therefore complex and the manipulator difficult to control. Only recently has research on parallel manipulators with fewer than six degrees of freedom turned toward decoupling the position and orientation of the end-effector, a direction of real interest to scientists in parallel robotics. Kinematic decoupling for a parallel manipulator means that each motion of the moving platform corresponds to the input of only one leg or one group of legs, and that this input produces no other motion. Nevertheless, to date, the number of real applications of decoupled-motion parallel manipulators is still quite limited. This is partly because effective development strategies for such closed-loop structures are not obvious. In addition, it is very difficult to design mechanisms with complete decoupling, though it is possible for lower-DOF parallel manipulators. To realize kinematic decoupling, the manipulator must possess a special structure; investigating a parallel manipulator with decoupled motion therefore remains a challenging task. This thesis deals with a lower-mobility parallel manipulator with decoupled motions. A novel parallel manipulator is proposed, consisting of a moving platform connected to a fixed base by three legs. Each leg is made of one C (cylindrical), one R (revolute), and one U (universal) joint. The mobility of the manipulator and the structure of the inactive joint are analyzed. The kinematics of the manipulator, including inverse and forward kinematics, the velocity equation, kinematic singularities, and stiffness, are studied. The workspace of the parallel manipulator is examined, and a design optimization is conducted for the prescribed workspace. It has been found that, owing to the special arrangement of the legs and joints, this parallel manipulator performs three translational degrees of freedom with decoupled motions and is fully isotropic. This advantage has great potential for machine tools and coordinate measuring machines (CMM). / UOIT
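The decoupling and isotropy claims can be made concrete with a toy model: when each actuated joint drives exactly one Cartesian axis of the platform, the Jacobian is the identity and its condition number is 1 everywhere. The sketch below shows only this idealized case, not the actual kinematics of the CRU-legged design derived in the thesis:

```python
import numpy as np

def forward_kinematics(q):
    """Fully decoupled translational manipulator: each actuated
    input q[i] moves the platform along one Cartesian axis only."""
    return np.asarray(q, dtype=float)  # x = q1, y = q2, z = q3

def jacobian(q):
    """Identity Jacobian: condition number 1 at every configuration,
    i.e. the mechanism is isotropic and free of Jacobian singularities."""
    return np.eye(3)

q = [0.10, 0.25, 0.40]  # hypothetical actuator inputs (m)
print(forward_kinematics(q), np.linalg.cond(jacobian(q)))
```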
88

Statistical methods for function estimation and classification

Kim, Heeyoung 20 June 2011 (has links)
This thesis consists of three chapters. The first chapter focuses on adaptive smoothing splines for fitting functions with varying roughness. In the first part of the first chapter, we study an asymptotically optimal procedure to choose the value of a discretized version of the variable smoothing parameter in adaptive smoothing splines. With the choice given by the multivariate version of the generalized cross validation, the resulting adaptive smoothing spline estimator is shown to be consistent and asymptotically optimal under some general conditions. In the second part, we derive the asymptotically optimal local penalty function, which is subsequently used for the derivation of the locally optimal smoothing spline estimator. In the second chapter, we propose a Lipschitz regularity based statistical model, and apply it to coordinate measuring machine (CMM) data to estimate the form error of a manufactured product and to determine the optimal sampling positions of CMM measurements. Our proposed wavelet-based model takes advantage of the fact that the Lipschitz regularity holds for the CMM data. The third chapter focuses on the classification of functional data which are known to be well separable within a particular interval. We propose an interval based classifier. We first estimate a baseline of each class via convex optimization, and then identify an optimal interval that maximizes the difference among the baselines. Our interval based classifier is constructed based on the identified optimal interval. The derived classifier can be implemented via a low-order-of-complexity algorithm.
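The role of the smoothing parameter that the first chapter makes adaptive can be seen with an off-the-shelf smoothing spline; a minimal sketch using SciPy's single global parameter s (not the thesis's locally varying penalty):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(8 * np.pi * x**2) + rng.normal(0, 0.1, x.size)  # varying roughness

# Larger s = more smoothing. A single global s must over-smooth the
# wiggly right end or under-smooth the flat left end of this curve,
# which is exactly what motivates an adaptive (local) penalty.
smooth = UnivariateSpline(x, y, s=2.0)
rough = UnivariateSpline(x, y, s=0.2)
print(smooth(0.5), rough(0.5))
```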
90

A Riemannian Geometric Mapping Technique for Identifying Incompressible Equivalents to Subsonic Potential Flows

German, Brian Joseph 05 April 2007 (has links)
This dissertation presents a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map a subsonic flow into a canonical Laplacian flow with the same boundary conditions. The method represents the generalization of the methods of Prandtl-Glauert and Karman-Tsien and gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by the analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the wave equation under Lorentz transformations. Whereas elements of the special theory can be invoked for linear and global compressibility effects, the question posed in this work is whether other techniques from relativity theory could be used for effects that are nonlinear and local. This line of thought leads to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. The dissertation presents the theory and a numerical method for practical solutions of equivalent incompressible flows over arbitrary profiles. The numerical method employs an iterative approach involving the solution of the incompressible flow with a panel method and the solution of the coordinate mapping to the canonical flow with a finite difference approach. This method is demonstrated for flow over a circular cylinder and over a NACA 0012 profile. Results are validated with subcritical full potential test cases available in the literature. Two areas of applicability of the method have been identified. The first is airfoil inverse design leveraging incompressible flow knowledge and empirical data for the potential field effects on boundary layer transition and separation. The second is aerodynamic testing using distorted models.
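For context, the two classical corrections that this work generalizes relate the compressible pressure coefficient to its incompressible value at freestream Mach number, with beta the Prandtl-Glauert factor:

```latex
\beta = \sqrt{1 - M_\infty^2},
\qquad
\text{Prandtl--Glauert:}\quad C_p = \frac{C_{p,0}}{\beta},
\qquad
\text{K\'arm\'an--Tsien:}\quad
C_p = \frac{C_{p,0}}{\beta + \dfrac{M_\infty^2}{1 + \beta}\,\dfrac{C_{p,0}}{2}}.
```

Both are global algebraic rescalings of the incompressible solution; the geometric mapping developed here instead accounts for compressibility effects that are nonlinear and local.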
