31

Sufficient Dimension Reduction with Missing Data

XIA, QI January 2017 (has links)
Existing sufficient dimension reduction (SDR) methods typically consider cases with no missing data. This dissertation proposes methods that extend SDR to settings where the response can be missing. The first part of the dissertation focuses on the seminal sliced inverse regression (SIR) approach proposed by Li (1991). We show that missing responses generally invalidate the inverse regressions under a missing-at-random mechanism. We then propose a simple and effective inverse probability weighting adjustment that restores the validity of SIR, and introduce a marginal coordinate test for the adjusted estimator. The proposed method shares the simplicity of SIR and requires the linear conditional mean assumption. The second part of the dissertation proposes two new estimating equation procedures: the complete-case estimating equation approach and the inverse probability weighted estimating equation approach. The two approaches apply to a family of dimension reduction methods that includes ordinary least squares, principal Hessian directions, and SIR. By solving the estimating equations, the two approaches avoid the assumptions common in the SDR literature: the linear conditional mean assumption and the constant conditional variance assumption. For all the aforementioned methods, asymptotic properties are established, and strong finite-sample performance is demonstrated through extensive numerical studies as well as a real data analysis. In addition, existing estimators of the central mean space perform unevenly across different types of link functions. To address this limitation, a new hybrid SDR estimator is proposed that successfully recovers the central mean space for a wide range of link functions. Based on the new hybrid estimator, we further study the order determination procedure and the marginal coordinate test. The superior performance of the hybrid estimator over existing methods is demonstrated in simulation studies. The proposed procedures for responses missing at random adapt readily to this hybrid method. / Statistics
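As a toy illustration of the inverse-probability-weighting idea (not the dissertation's implementation — the model, propensity, and slicing scheme below are invented for the sketch), one can weight each observed case by 1/&pi;(x) when forming the SIR slice means, so that the selection bias from the missing responses cancels:

```python
import math
import random

random.seed(7)

# Toy single-index model: Y depends on X = (X1, X2) only through b'X,
# so the central subspace is span{b}.
n, b = 4000, (1.0, 0.5)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
Y = [b[0] * x1 + b[1] * x2 + 0.3 * random.gauss(0, 1) for x1, x2 in X]

def propensity(x):
    # P(Y observed | X = x): missing at random, depends on covariates only.
    return 1.0 / (1.0 + math.exp(-(0.5 + x[0])))

observed = [random.random() < propensity(x) for x in X]

# IPW-adjusted SIR: sort observed cases by Y, slice, and estimate each
# slice mean of X with weights 1/pi(X) to undo the selection bias.
H = 5
cases = sorted((y, x) for (y, x, o) in zip(Y, X, observed) if o)
k = len(cases) // H
slices = [cases[h * k:(h + 1) * k] for h in range(H)]

M = [[0.0, 0.0], [0.0, 0.0]]  # weighted between-slice outer-product matrix
total_w = sum(1.0 / propensity(x) for _, x in cases)
for s in slices:
    w = [1.0 / propensity(x) for _, x in s]
    W = sum(w)
    m = [sum(wi * x[d] for wi, (_, x) in zip(w, s)) / W for d in (0, 1)]
    for i in (0, 1):
        for j in (0, 1):
            M[i][j] += (W / total_w) * m[i] * m[j]

# Leading eigenvector of the 2x2 matrix M estimates the SIR direction
# (X is standard normal by construction, so the standardization step is skipped).
a, bb, c = M[0][0], M[0][1], M[1][1]
lam = 0.5 * (a + c) + math.sqrt((0.5 * (a - c)) ** 2 + bb * bb)
v = (bb, lam - a) if abs(bb) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))

cos_sim = abs(v[0] * b[0] + v[1] * b[1]) / (math.hypot(*v) * math.hypot(*b))
print(round(cos_sim, 3))
```

Despite roughly 40% of the responses being missing, the weighted slice means keep the estimated direction closely aligned with the true index vector b.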
32

Optimization Techniques Exploiting Problem Structure: Applications to Aerodynamic Design

Shenoy, Ajit R. 11 April 1997 (has links)
The research presented in this dissertation investigates the use of all-at-once methods applied to aerodynamic design. All-at-once schemes are usually based on the assumption of sufficient continuity in the constraints and objectives, an assumption that can be troublesome in the presence of shock discontinuities. Such problems require special treatment, and we study several approaches. Our all-at-once methods are based on the Sequential Quadratic Programming (SQP) method and are designed to exploit the structure inherent in a given problem. The first method is a reduced Hessian formulation, which projects the optimization problem onto a lower-dimensional design space. The second method exploits sparse structure, which can yield significant savings in computational effort as well as storage requirements. An underlying theme in all our applications is that careful analysis of the given problem often leads to an efficient implementation of these all-at-once methods. Chapter 2 describes a nozzle design problem involving one-dimensional transonic flow. An initial formulation as an optimal control problem allows us to solve the problem as a two-point boundary value problem, which provides useful insight into its nature. Using the reduced Hessian formulation for this problem, we find that a conventional CFD method based on shock capturing performs poorly. The numerical difficulties caused by the presence of the shock can be alleviated by reformulating the constraints so that the shock is treated explicitly; this amounts to a shock-fitting technique. In Chapter 3, we study variants of a simplified temperature control problem, solved using a sparse SQP scheme. We show that the optimizer performs well when the underlying infinite-dimensional problem is well-posed, whereas it fails to produce good results when the underlying infinite-dimensional problem is ill-posed. A transonic airfoil design problem is studied in Chapter 4, using the reduced SQP formulation. We propose a scheme for performing the optimization subtasks that is based on an Euler implicit time integration scheme; the motivation is to preserve the solution-finding structure used in the analysis algorithm. Preliminary results obtained with this method are promising. Numerical results are presented for all the problems described. / Ph. D.
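The reduced Hessian idea can be sketched on a toy equality-constrained quadratic program (illustrative data only, not a problem from the dissertation): eliminate the constraint with a null-space basis Z and take a single Newton step in the reduced space, which here is one-dimensional:

```python
# Model problem:  minimize 0.5 x'Hx - g'x   subject to  a'x = rhs.
# The null-space ("reduced Hessian") method works in the space of
# feasible variations {z : a'z = 0} instead of the full design space.
H = [[4.0, 1.0], [1.0, 3.0]]
g = [1.0, 2.0]
a, rhs = [1.0, 1.0], 1.0

xbar = [rhs, 0.0]   # any particular point satisfying a'xbar = rhs
Z = [1.0, -1.0]     # basis vector for the null space of a'

def hmul(v):        # Hessian-vector product
    return [H[0][0] * v[0] + H[0][1] * v[1],
            H[1][0] * v[0] + H[1][1] * v[1]]

grad = [hv - gv for hv, gv in zip(hmul(xbar), g)]   # gradient at xbar
zHz = sum(zi * hi for zi, hi in zip(Z, hmul(Z)))    # reduced Hessian (1x1)
zg = sum(zi * gi for zi, gi in zip(Z, grad))        # reduced gradient
p = -zg / zHz                                       # reduced Newton step

x = [xbar[0] + p * Z[0], xbar[1] + p * Z[1]]        # constrained minimizer
print(x)
```

Because the quadratic model is exact here, one reduced step lands on the constrained minimizer (0.2, 0.8); in the aerodynamic setting the same projection is applied repeatedly inside the SQP iteration.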
33

Graphs and pairings of elliptic curves

Mula, Marzio 22 February 2024 (has links)
Most isogeny-based cryptosystems ultimately rely, for their security, on ℓ-IsoPath, i.e. the problem of finding a secret ℓ-smooth isogeny between two elliptic curves. Since cryptographic applications usually employ weaker variants of ℓ-IsoPath for practical reasons, it is natural to ask whether these variants are equally hard from a computational perspective. For example, what happens if the endomorphism ring of one of the curves is known? Does the existence of suitable pairings affect the hardness of ℓ-IsoPath? What happens if some non-trivial endomorphisms of the domain and codomain curves are known? These questions lead to different problems, some of which are considered throughout this thesis. To prevent anyone from knowing the endomorphism ring of a supersingular elliptic curve, we would need a method to hash into the supersingular isogeny graph, i.e. the graph whose vertices are supersingular elliptic curves (up to isomorphism) and whose edges are isogenies of fixed degree. We give examples of cryptographic protocols that could benefit from such hashing and survey some known methods. Since none of them is at the same time efficient and cryptographically secure, we also point out a few alternative approaches. Later on, we leverage the classic Deuring correspondence between supersingular elliptic curves and quaternion orders to study a weaker version of ℓ-IsoPath, inspired by the study of CM theory in the previous part. We then focus on the construction of pairings of elliptic curves, showing that, in the general case, finding distinct pairings compatible with a secret isogeny is no easier than retrieving the isogeny itself. In the presence of an orientation, on the other hand, we show that the existence of suitable self-pairings, together with a recent attack on the isogeny-based key exchange SIDH, does lead to efficiently solving ℓ-IsoPath for some class-group-action-based protocols. In particular, we completely characterize the cases in which these self-pairings exist. Finally, we introduce a different graph of elliptic curves, which has not been considered before in isogeny-based cryptography and which does not, in fact, arise from isogenies: the Hessian graph. We give a (still partial) account of its remarkable regularity and discuss potential cryptographic applications.
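A toy version of the supersingular 2-isogeny graph can be built from the classical level-2 modular polynomial Φ₂, whose roots at a supersingular j-invariant are exactly the j-invariants 2-isogenous to it. The brute-force sketch below works over F_{p²} with the tiny prime p = 31 (chosen so that p ≡ 3 mod 4, making j = 1728 supersingular and F_{p²} = F_p(i)); it is purely illustrative and bears no resemblance to cryptographic parameter sizes:

```python
# Supersingular 2-isogeny graph over F_{p^2}, p = 31 (toy sketch).
p = 31  # p % 4 == 3, so F_{p^2} = F_p(i) with i^2 = -1, and j = 1728 is supersingular

def add(u, v): return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)
def mul(u, v):
    a, b = u; c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)
def smul(c, u): return ((c * u[0]) % p, (c * u[1]) % p)

def phi2(x, y):
    """Classical modular polynomial Phi_2(x, y), reduced mod p."""
    x2, y2 = mul(x, x), mul(y, y)
    x3, y3 = mul(x2, x), mul(y2, y)
    t = add(x3, y3)
    t = add(t, smul(-1, mul(x2, y2)))
    t = add(t, smul(1488, add(mul(x2, y), mul(x, y2))))
    t = add(t, smul(-162000, add(x2, y2)))
    t = add(t, smul(40773375, mul(x, y)))
    t = add(t, smul(8748000000, add(x, y)))
    t = add(t, smul(-157464000000000, (1, 0)))
    return t

def neighbors(j):
    # Roots y of Phi_2(j, y): the j-invariants 2-isogenous to j (brute force).
    return [(a, b) for a in range(p) for b in range(p) if phi2(j, (a, b)) == (0, 0)]

# BFS from j = 1728: the supersingular 2-isogeny graph is connected, so this
# visits every supersingular j-invariant (floor(31/12) + 1 = 3 of them).
start = (1728 % p, 0)
seen, frontier = {start}, [start]
while frontier:
    j = frontier.pop()
    for nb in neighbors(j):
        if nb not in seen:
            seen.add(nb)
            frontier.append(nb)

print(sorted(seen))
```

Real protocols never enumerate roots like this; they use explicit isogeny formulas and structured primes, but the graph being walked is the same object.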
34

Novel higher order regularisation methods for image reconstruction

Papafitsoros, Konstantinos January 2015 (has links)
In this thesis we study novel higher-order total variation-based variational methods for digital image reconstruction. These methods are formulated in the context of Tikhonov regularisation. We focus on regularisation techniques in which the regulariser incorporates second-order derivatives or a sophisticated combination of first- and second-order derivatives. Introducing higher-order derivatives into the regularisation process has been shown to be advantageous over the classical first-order case, i.e., total variation regularisation, as classical artifacts such as the staircasing effect are significantly reduced or eliminated entirely. In image inpainting, too, higher-order derivatives in the regulariser turn out to be crucial for achieving interpolation across large gaps. First, we introduce, analyse and implement a combined first- and second-order regularisation method with applications in image denoising, deblurring and inpainting. The method, numerically realised by the split Bregman algorithm, is computationally efficient and capable of giving results comparable with total generalised variation (TGV), a state-of-the-art higher-order method. An additional experimental analysis is performed for image inpainting and an online demo is provided on the IPOL (Image Processing Online) website. We also compute and study properties of exact solutions of the one-dimensional total generalised variation problem with L^{2} data fitting term, for simple piecewise affine data functions with or without jumps. This gives insight into how this type of regularisation behaves and unravels the role of the TGV parameters. Finally, we introduce, study and analyse a novel non-local Hessian functional. We prove localisations of the non-local Hessian to the local analogue in several topologies, and our analysis results in derivative-free characterisations of higher-order Sobolev and BV spaces.
An alternative formulation of a non-local Hessian functional is also introduced which is able to produce piecewise affine reconstructions in image denoising, outperforming TGV.
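A minimal 1-D sketch of combined first- and second-order regularisation: minimise E(u) = ½‖u − f‖² + α Σ φ(Du) + β Σ φ(D²u) with φ(t) = √(t² + ε) a smoothed absolute value. This is a toy stand-in minimised by plain gradient descent, not the split Bregman solver of the thesis, and the signal, weights, and ε are invented:

```python
import math

n = 30
clean = [min(i / 10.0, 1.5) for i in range(n)]          # ramp, then flat
f = [c + 0.3 * (-1) ** i for i, c in enumerate(clean)]  # oscillatory noise

alpha, beta, eps = 0.2, 0.2, 0.04

def dphi(t):                      # derivative of the smoothed absolute value
    return t / math.sqrt(t * t + eps)

def grad(u):
    g = [u[i] - f[i] for i in range(n)]      # data-fidelity term
    for i in range(n - 1):                   # first-difference (TV-like) term
        t = alpha * dphi(u[i + 1] - u[i])
        g[i + 1] += t
        g[i] -= t
    for j in range(1, n - 1):                # second-difference term
        t = beta * dphi(u[j + 1] - 2 * u[j] + u[j - 1])
        g[j + 1] += t
        g[j] -= 2 * t
        g[j - 1] += t
    return g

u = list(f)
for _ in range(1500):                        # plain gradient descent
    gu = grad(u)
    u = [ui - 0.04 * gi for ui, gi in zip(u, gu)]

def mse(v):
    return sum((vi - ci) ** 2 for vi, ci in zip(v, clean)) / n

print(round(mse(f), 4), round(mse(u), 4))
```

Even this crude scheme removes most of the oscillation while keeping the ramp, which is the qualitative behaviour (reduced staircasing on affine pieces) that motivates the second-order term.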
35

Fast Methods for Bimolecular Charge Optimization

Bardhan, Jaydeep P., Lee, J.H., Kuo, Shihhsien, Altman, Michael D., Tidor, Bruce, White, Jacob K. 01 1900 (has links)
We report a Hessian-implicit optimization method to quickly solve the charge optimization problem over protein molecules: given a ligand and its complex with a receptor, determine the ligand charge distribution that minimizes the electrostatic free energy of binding. The new optimization couples the boundary element method (BEM) with a primal-dual interior point method (PDIPM); initial results suggest that the method scales much better than previous methods. The quadratic objective function is the electrostatic free energy of binding, where the Hessian matrix serves as an operator that maps charge to potential. The unknowns are the charge values at the charge points, and they are limited by equality and inequality constraints that model physical considerations, e.g. conservation of charge. Previous approaches used a finite-difference method to model the Hessian matrix, which requires significant computational effort to remove grid-based inaccuracies. The new approach instead uses BEM, with precorrected-FFT (pFFT) acceleration to compute the potential induced by the charges; this part will be explained in detail by Shihhsien Kuo in another talk. Even though the Hessian matrix can be calculated an order of magnitude faster than in previous approaches, it is still quite expensive to form explicitly. Instead, the KKT conditions are solved by a PDIPM, with a Krylov-based iterative solver finding the Newton direction at each step. Hence only Hessian-vector products are needed, and these can be evaluated quickly using pFFT. The new method, with proper preconditioning, solves a 500-variable problem nearly 10 times faster than techniques that must form the Hessian matrix explicitly. Furthermore, the algorithm scales well because the number of IPM iterations is robust to problem size. The significant reduction in cost allows the analysis of much larger molecular systems than could be solved in a reasonable time with previous methods.
/ Singapore-MIT Alliance (SMA)
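The Hessian-implicit idea — touching the Hessian only through matrix-vector products inside a Krylov solver — can be sketched on a toy equality-constrained quadratic (invented data; projected conjugate gradients stands in here for the full PDIPM/pFFT machinery, and the inequality constraints are omitted):

```python
# Minimise 0.5 q'Hq + c'q subject to charge conservation sum(q) = Q,
# accessing H only through Hessian-vector products.  In the paper's
# setting the product is the pFFT-accelerated BEM operator apply.
n, Q = 6, 1.0
H = [[(2.0 if i == j else 0.0) + 1.0 / (1 + abs(i - j)) for j in range(n)]
     for i in range(n)]  # symmetric, diagonally dominant -> SPD (toy stand-in)
c = [1.0, -1.0, 2.0, 0.0, 1.0, -2.0]

def hessvec(v):  # the only access to H the solver needs
    return [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]

def project(v):  # onto {x : sum(x) = 0}, the space of feasible directions
    m = sum(v) / n
    return [x - m for x in v]

# Projected conjugate gradients from a feasible starting charge vector.
q = [Q / n] * n
r = project([-(h + ci) for h, ci in zip(hessvec(q), c)])
p, rr = list(r), sum(x * x for x in r)
for _ in range(50):
    if rr < 1e-20:
        break
    Ap = project(hessvec(p))
    step = rr / sum(pi * api for pi, api in zip(p, Ap))
    q = [qi + step * pi for qi, pi in zip(q, p)]
    r = [ri - step * api for ri, api in zip(r, Ap)]
    rr_new = sum(x * x for x in r)
    p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
    rr = rr_new

print([round(x, 6) for x in q])
```

At the constrained minimum the gradient Hq + c must be a multiple of the all-ones vector (the constraint normal), which is what the test below checks; no explicit Hessian factorization is ever performed.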
36

Information geometries in black hole physics

Pidokrajt, Narit January 2009 (has links)
In this thesis we aim to develop new perspectives on the statistical mechanics of black holes using an information geometric approach (Ruppeiner and Weinhold geometry). The Ruppeiner metric is defined as a Hessian matrix on a Gibbs surface and provides a geometric description of thermodynamic systems in equilibrium. Ruppeiner geometry exhibits physically suggestive features: a flat Ruppeiner metric for systems with no interactions, e.g. the ideal gas, and curvature singularities signaling critical behavior of the system. We construct a flatness theorem based on the scaling property of black holes, which proves useful in many cases. The other thermodynamic geometry, known as the Weinhold geometry, is defined as the Hessian of the internal energy and is conformally related to the Ruppeiner metric, with the system's temperature as the conformal factor. We investigate a number of black hole families in various gravity theories. Our findings are briefly summarized as follows: the Reissner-Nordström type, the Einstein-Maxwell-dilaton and BTZ black holes have flat Ruppeiner metrics that can be represented by a unique state space diagram. We conjecture that the state space diagram encodes extremality properties of the black hole solution. Kerr-type black holes have curved Ruppeiner metrics whose curvature singularities are meaningful in five dimensions and higher, signifying the onset of thermodynamic instabilities of the black hole in higher dimensions. All the three-parameter black hole families in our study have non-flat Ruppeiner and Weinhold metrics, and their associated curvature singularities occur in the extremal limits. We also study two-dimensional black hole families whose thermodynamic geometries depend on parameters that determine the thermodynamics of the black hole in question. The tidal charged black hole, which arises in braneworld gravity, is also studied.
Despite its similarity to the Reissner-Nordström type, its thermodynamic geometries are distinctive. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 2: Submitted. / Geometry and Physics
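In symbols, the two Hessian metrics and their conformal relation read as follows (standard definitions, stated here for a state vector of mass M, entropy S, and further conserved charges N^a):

```latex
% Ruppeiner metric: negative Hessian of the entropy
g^{R}_{ij} \;=\; -\,\frac{\partial^{2} S(M, N^{a})}{\partial X^{i}\,\partial X^{j}},
\qquad X = (M, N^{a}),
\\[4pt]
% Weinhold metric: Hessian of the internal energy (the mass)
g^{W}_{ij} \;=\; \frac{\partial^{2} M(S, N^{a})}{\partial Y^{i}\,\partial Y^{j}},
\qquad Y = (S, N^{a}),
\\[4pt]
% conformal relation, with the temperature T as the conformal factor
ds^{2}_{R} \;=\; \frac{1}{T}\, ds^{2}_{W}.
```

A flat g^R (vanishing curvature) then corresponds to the non-interacting case mentioned above, while curvature singularities of g^R flag critical behavior.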
37

Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow

Flath, Hannah Pearl 11 September 2013 (has links)
Subsurface flow phenomena characterize many important societal issues in energy and the environment. A key feature of these problems is that subsurface properties are uncertain, due to the sparsity of direct observations of the subsurface. The Bayesian formulation of this inverse problem provides a systematic framework for inferring uncertainty in the properties given uncertainties in the data, the forward model, and prior knowledge of the properties. We address the problem: given noisy measurements of the head, the probability density function (pdf) describing the noise, prior information in the form of a pdf of the hydraulic conductivity, and a groundwater flow model relating the head to the hydraulic conductivity, find the posterior pdf of the parameters describing the hydraulic conductivity field. Unfortunately, conventional sampling of this pdf to compute statistical moments is intractable for problems governed by large-scale forward models and high-dimensional parameter spaces. We construct a Gaussian process surrogate of the posterior pdf based on Bayesian interpolation between a set of "training" points. We employ a greedy algorithm to find the training points by solving a sequence of optimization problems, each placing a new training point at the maximizer of the error in the approximation. Scalable Newton optimization methods solve this "optimal" training point problem. We tailor the Gaussian process surrogate to the curvature of the underlying posterior pdf according to the Hessian of the log posterior at a subset of training points, made computationally tractable by a low-rank approximation of the data misfit Hessian. A Gaussian mixture approximation of the posterior is extracted from the Gaussian process surrogate and used as a proposal in a Markov chain Monte Carlo method for sampling both the surrogate and the true posterior. The Gaussian process surrogate is also used as a first-stage approximation in a two-stage delayed acceptance MCMC method. We provide evidence for the viability of the low-rank approximation of the Hessian through numerical experiments on a large-scale atmospheric contaminant transport problem and analysis of an infinite-dimensional model problem, and we provide similar results for our groundwater problem. We then present results from the proposed MCMC algorithms. / text
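The low-rank idea can be sketched with a Hessian that is only ever touched through Hessian-vector products. Below, a toy Gauss-Newton-style Hessian H = JᵀJ of exact rank 3 stands in for the data-misfit Hessian, and power iteration with deflation stands in for the scalable Lanczos-type machinery a large-scale code would use; all data is invented:

```python
import math

J = [[3.0, 1.0, 0.0, 0.0, 0.0, 0.0],   # small fixed "Jacobian" (rank 3)
     [0.0, 2.0, 1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]]
m, n = len(J), len(J[0])

def hessvec(v):  # H v = J'(J v); H itself is never formed
    Jv = [sum(J[i][k] * v[k] for k in range(n)) for i in range(m)]
    return [sum(J[i][k] * Jv[i] for i in range(m)) for k in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Top eigenpairs by power iteration, deflating previously found directions.
eigpairs = []
for k in range(3):
    v = [math.sin(k + i + 1.0) for i in range(n)]  # deterministic start vector
    for _ in range(300):
        v = hessvec(v)
        for _, u in eigpairs:                      # Gram-Schmidt deflation
            d = dot(v, u)
            v = [vi - d * ui for vi, ui in zip(v, u)]
        nrm = math.sqrt(dot(v, v))
        v = [vi / nrm for vi in v]
    eigpairs.append((dot(v, hessvec(v)), v))       # Rayleigh quotient, eigvec

# Rank-3 reconstruction should recover H, since H has rank 3 exactly.
Hfull = [hessvec([1.0 if j == i else 0.0 for j in range(n)]) for i in range(n)]
Hrec = [[sum(lam * u[i] * u[j] for lam, u in eigpairs) for j in range(n)]
        for i in range(n)]
err = max(abs(Hfull[i][j] - Hrec[i][j]) for i in range(n) for j in range(n))
print(err)
```

The point mirrored from the abstract: when the spectrum decays (here it truncates), a handful of matvec-driven eigenpairs captures the Hessian, so the surrogate construction never needs the full matrix.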
38

Feature-based matching in historic repeat photography: an evaluation and assessment of feasibility.

Gat, Christopher 16 August 2011 (has links)
This study reports a quantitative evaluation of a set of state-of-the-art feature detectors and descriptors in the context of repeat photography. Unlike most related work, the proposed study assesses the performance of feature detectors when intra-pair variations are uncontrolled and due to a variety of factors (landscape change, weather conditions, different acquisition sensors); there is no systematic way to model the factors inducing image change. The proposed evaluation is performed in the context of image matching, i.e. in conjunction with a descriptor and a matching strategy. Thus, beyond comparing the performance of these detectors and descriptors, we also examine the feasibility of feature-based matching on repeat photography. Our dataset consists of a set of historic and repeat image pairs representative of the database created by the Mountain Legacy Project (www.mountainlegacy.ca). / Graduate
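A typical matching strategy of the kind such an evaluation pairs with each detector and descriptor is nearest-neighbour matching with Lowe's ratio test, which rejects a putative match whenever the second-best candidate is almost as close as the best. The sketch below uses invented toy 4-D descriptors, not output of the detectors studied:

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test."""
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(da, desc_b[best]) < ratio * dist(da, desc_b[second]):
            matches.append((i, best))
    return matches

historic = [(0.0, 0.0, 0.0, 0.0),      # distinctive feature
            (5.0, 5.0, 0.0, 0.0)]      # ambiguous feature (two near-twins below)
repeat = [(9.0, 9.0, 9.0, 9.0),
          (5.1, 5.0, 0.0, 0.0),
          (0.1, 0.0, 0.0, 0.0),
          (5.0, 5.1, 0.0, 0.0)]

print(match(historic, repeat))  # only the distinctive feature survives the test
```

The ambiguous historic feature has two almost equally close candidates in the repeat image, so the ratio test discards it; exactly this kind of rejection behaviour is what an uncontrolled-variation evaluation has to measure.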
39

Derivace v aplikačních úlohách - sbírka řešených příkladů. / Derivatives in applied problems - a collection of solved examples.

SEKAL, Tomáš January 2016 (has links)
The aim of this diploma thesis is to create a collection of exercises on applications of the derivative. It focuses primarily on problems drawn from everyday situations, physics, and technical disciplines. The exercises are ordered from easy to advanced. Each example is accompanied by a worked solution, illustrated with sketches of the given situation (created mostly in Google SketchUp), graphs of functions created in GeoGebra, and, where appropriate, 3D graphs of the functions produced with the mathematical program Maple. The introduction of the thesis provides the theoretical background and a "first aid" guide to solving this kind of exercise.
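A representative exercise of this type (an illustrative everyday-geometry task, not one taken from the collection itself) pairs the calculus solution with a numerical cross-check:

```python
# An open-top box is folded from a square sheet of side a by cutting
# squares of side x from the corners.  Its volume V(x) = x*(a - 2x)^2
# is maximised where V'(x) = (a - 2x)*(a - 6x) = 0, i.e. at x = a/6.
a = 12.0

def V(x):
    return x * (a - 2 * x) ** 2

x_analytic = a / 6  # from setting the derivative to zero

# Cross-check the calculus answer with a coarse grid search over (0, a/2).
grid = [i / 1000.0 for i in range(1, 6000)]
x_numeric = max(grid, key=V)

print(x_analytic, x_numeric, V(x_analytic))
```

For a = 12 the optimal cut is x = 2 with volume 128, and the grid search agrees, which is the kind of verification a student can perform alongside the analytic solution.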
40

Small Blob Detection in Medical Images

January 2015 (has links)
Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally efficient and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this dissertation, one type of imaging object is of interest: small blobs. Examples of small blob objects are cells in histopathology images, small breast lesions in ultrasound images, and glomeruli in kidney MR images. This problem is particularly challenging because small blobs often have inhomogeneous intensity distributions and indistinct boundaries against the background. This research develops a generalized four-phase system for small blob detection. The system includes (1) raw image transformation, (2) Hessian pre-segmentation, (3) feature extraction and (4) unsupervised clustering for post-pruning. First, detecting blobs in 2D images is studied, and a Hessian-based Laplacian of Gaussian (HLoG) detector is proposed. With scale space theory as its foundation, the image is smoothed via LoG; Hessian analysis is then launched to identify the single optimal scale, based on which a pre-segmentation is conducted. Novel regional features are extracted from pre-segmented blob candidates and fed to Variational Bayesian Gaussian Mixture Models (VBGMM) for post-pruning. Sixteen cell histology images and two hundred cell fluorescence images are used to demonstrate the performance of HLoG. Next, as an extension, a Hessian-based Difference of Gaussians (HDoG) detector is proposed, capable of identifying small blobs in 3D images. Specifically, kidney glomeruli segmentation from 3D MRI (6 rats, 3 humans) is investigated. The experimental results show that HDoG has the potential to automatically detect glomeruli, enabling new measurements of renal microstructures and pathology in preclinical and clinical studies. Recognizing that computation time is a key factor in clinical adoption, the last phase of this research investigates a data reduction technique for VBGMM in HDoG to handle large-scale datasets. A new coreset algorithm is developed for variational Bayesian mixture models. Using the same MRI dataset, the four-phase system with coreset-VBGMM is observed to achieve performance similar to that with the full dataset while running about 20 times faster. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2015
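The LoG/Hessian core of phases (1)-(2) can be sketched on a tiny synthetic image: smooth with a Gaussian, take the Laplacian, locate a bright blob at the minimum of the response, and confirm a negative-definite Hessian there. This is a single fixed scale with invented parameters; HLoG additionally performs optimal scale selection and VBGMM post-pruning:

```python
import math

# 15x15 synthetic image with one bright Gaussian blob centred at (7, 7).
N = 15
img = [[math.exp(-((i - 7) ** 2 + (j - 7) ** 2) / (2 * 2.0 ** 2))
        for j in range(N)] for i in range(N)]

# Gaussian smoothing (5x5 kernel, sigma = 1.5, zero padding at borders).
sig, R = 1.5, 2
ker = [[math.exp(-(a * a + b * b) / (2 * sig * sig))
        for b in range(-R, R + 1)] for a in range(-R, R + 1)]
ksum = sum(sum(row) for row in ker)
ker = [[k / ksum for k in row] for row in ker]

def px(i, j):
    return img[i][j] if 0 <= i < N and 0 <= j < N else 0.0

s = [[sum(ker[a + R][b + R] * px(i + a, j + b)
          for a in range(-R, R + 1) for b in range(-R, R + 1))
      for j in range(N)] for i in range(N)]

# Laplacian of the smoothed image; a bright blob is a deep minimum.
lap = {(i, j): s[i+1][j] + s[i-1][j] + s[i][j+1] + s[i][j-1] - 4 * s[i][j]
       for i in range(1, N - 1) for j in range(1, N - 1)}
ci, cj = min(lap, key=lap.get)

# Hessian entries at the detected point (negative definite for a bright blob).
hxx = s[ci+1][cj] - 2 * s[ci][cj] + s[ci-1][cj]
hyy = s[ci][cj+1] - 2 * s[ci][cj] + s[ci][cj-1]
hxy = (s[ci+1][cj+1] - s[ci+1][cj-1] - s[ci-1][cj+1] + s[ci-1][cj-1]) / 4
print((ci, cj), hxx * hyy - hxy * hxy)
```

The detected point coincides with the blob centre, and the Hessian test (both diagonal entries negative, positive determinant) is the discrete analogue of the pre-segmentation criterion described above.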
