
Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings

Kilinc, Ismail Ozsel 30 November 2017 (has links)
Machine learning has been immensely successful in supervised learning, with outstanding examples in major industrial applications such as voice and image recognition. Following these developments, the most recent research has begun to focus primarily on algorithms that can exploit very large sets of unlabeled examples to reduce the amount of manually labeled data required for existing models to perform well. In this dissertation, we propose graph-based latent embedding/annotation/representation learning techniques in neural networks tailored for semi-supervised and unsupervised learning problems. Specifically, we propose a novel regularization technique called Graph-based Activity Regularization (GAR) and a novel output layer modification called Auto-clustering Output Layer (ACOL), which can be used separately or together to develop scalable and efficient learning frameworks for semi-supervised and unsupervised settings. First, using the GAR technique alone, we develop a framework providing an effective and scalable graph-based solution for semi-supervised settings in which there exists a large number of observations but only a small subset with ground-truth labels. The proposed approach fits naturally into the classification framework of neural networks, as it requires no additional task such as calculating a reconstruction error (as in autoencoder-based methods) or implementing a zero-sum game mechanism (as in adversarial-training-based methods). We demonstrate that GAR effectively and accurately propagates the available labels to unlabeled examples. Our results show performance comparable with state-of-the-art generative approaches for this setting, using an easier-to-train framework. Second, we explore a different type of semi-supervised setting in which a coarse level of labeling is available for all observations, but the model has to learn a fine, deeper level of latent annotations for each one.
Problems in this setting are likely to be encountered in many domains such as text categorization, protein function prediction, and image classification, as well as in exploratory scientific studies such as medical and genomics research. We treat this setting as simultaneous supervised classification (per the available coarse labels) and unsupervised clustering (within each coarse label) and propose a novel framework combining GAR with ACOL, which enables the network to perform concurrent classification and clustering. We demonstrate how the coarse label supervision impacts performance and how the classification task actually helps propagate useful clustering information between sub-classes. Comparative tests on the most popular image datasets rigorously demonstrate the effectiveness and competitiveness of the proposed approach. The third and final setup builds on the prior framework to unlock fully unsupervised learning, where we propose to substitute real, yet unavailable, parent-class information with pseudo class labels. In this novel unsupervised clustering approach, the network can exploit hidden information indirectly introduced through a pseudo classification objective. We train an ACOL network through this pseudo supervision together with an unsupervised objective based on GAR and ultimately obtain a k-means-friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. Our results show state-of-the-art performance for unsupervised clustering tasks on the MNIST, SVHN and USPS datasets, with the highest accuracies reported to date in the literature.
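As a minimal illustration of the underlying idea of propagating scarce labels over a similarity graph, here is a generic label-propagation sketch on synthetic data. This is only the setting's textbook baseline, not the dissertation's GAR formulation; the data and kernel width are hypothetical.

```python
import numpy as np

# Toy sketch of label propagation on a similarity graph: the generic idea
# behind spreading scarce labels to unlabeled observations. Illustrative
# only; not the dissertation's GAR technique.
def propagate_labels(X, y, n_iter=100, sigma=1.0):
    """y holds class indices for labeled points and -1 for unlabeled."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))      # dense RBF affinity graph
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    classes = np.unique(y[y >= 0])
    onehot = np.eye(len(classes))[y[y >= 0]]
    F = np.zeros((n, len(classes)))
    F[y >= 0] = onehot
    for _ in range(n_iter):
        F = P @ F                           # diffuse label mass over the graph
        F[y >= 0] = onehot                  # clamp the labeled points
    return classes[F.argmax(axis=1)]

# Two well-separated clusters, one labeled example each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = -np.ones(40, dtype=int)
y[0], y[20] = 0, 1
pred = propagate_labels(X, y)
```

With two tight clusters and one label each, the diffusion assigns every unlabeled point the label of its cluster's seed.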

Characterization of Hydrogeological Media Using Electromagnetic Geophysics

Linde, Niklas January 2005 (has links)
Radio magnetotellurics (RMT), crosshole ground penetrating radar (GPR), and crosshole electrical resistance tomography (ERT) were applied in a range of hydrogeological applications where geophysical data could improve hydrogeological characterization. A profile of RMT data collected over highly resistive granite was used to map subhorizontal fracture zones below 300 m depth, as well as a steeply dipping fracture zone, which was also observed on a coinciding seismic reflection profile. One-dimensional inverse modelling and 3D forward modelling with displacement currents included were necessary to test the reliability of features found in the 2D models, where the forward models did not include displacement currents and only lower frequencies were considered. An inversion code for RMT data was developed and applied to RMT data with an azimuthal electrical anisotropy signature collected over a limestone formation. The results indicated that RMT is a faster and more reliable technique for studying electrical anisotropy than azimuthal resistivity surveys. A new sequential inversion method to estimate hydraulic conductivity fields using crosshole GPR and tracer test data was applied to 2D synthetic examples. Given careful surveying, the results indicated that regularization of hydrogeological inverse problems using geophysical tomograms might improve models of hydraulic conductivity. A method to regularize geophysical inverse problems using geostatistical models was developed and applied to crosshole ERT and GPR data collected in unsaturated sandstone. The resulting models were geologically more reasonable than models where the regularization was based on traditional smoothness constraints. Electromagnetic geophysical techniques provide an inexpensive data source for estimating qualitative hydrogeological models, but hydrogeological data must be incorporated to make quantitative estimation of hydrogeological systems feasible.
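The regularized-inversion idea running through these methods can be sketched in a few lines as a smoothness-constrained (second-difference Tikhonov) inversion of an underdetermined linear forward problem. The operator and "conductivity" profile below are synthetic stand-ins, not the thesis's RMT/ERT/GPR codes.

```python
import numpy as np

# Sketch of a smoothness-regularized linear inversion, the generic idea
# behind regularized geophysical tomography. Synthetic example only.
def invert_smooth(G, d, alpha):
    """Solve min ||G m - d||^2 + alpha * ||L m||^2, L = 2nd-difference matrix."""
    n = G.shape[1]
    L = np.diff(np.eye(n), 2, axis=0)        # (n-2, n) roughening operator
    return np.linalg.solve(G.T @ G + alpha * (L.T @ L), G.T @ d)

rng = np.random.default_rng(1)
n = 50
m_true = np.sin(np.linspace(0.0, np.pi, n))  # smooth model profile
G = rng.normal(size=(30, n)) / np.sqrt(n)    # underdetermined forward operator
d = G @ m_true + 0.01 * rng.normal(size=30)
m_reg = invert_smooth(G, d, alpha=1.0)
m_ls = np.linalg.lstsq(G, d, rcond=None)[0]  # unregularized minimum-norm fit
```

Because the true profile is smooth, the roughness penalty recovers it from 30 noisy measurements far better than the unregularized minimum-norm solution does.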

Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms

January 2012 (has links)
Compressive sensing (CS) is a novel sampling methodology representing a paradigm shift from conventional data acquisition schemes. The theory of compressive sensing ensures that, under suitable conditions, compressible signals or images can be reconstructed from far fewer samples or measurements than are required by the Nyquist rate. So far in the literature, most work on CS concentrates on one-dimensional or two-dimensional data. However, besides involving far more data, three-dimensional (3D) data processing has particularities that require the development of new techniques in order to make successful transitions from theoretical feasibility to practical capability. This thesis studies several issues arising from the application of the CS methodology to 3D image processing tasks. Two specific applications are hyperspectral imaging and video compression, where 3D images are either directly unmixed or recovered as a whole from CS samples. The main issues include CS decoding models, preprocessing techniques and reconstruction algorithms, as well as CS encoding matrices in the case of video compression. Our investigation involves three major parts. (1) Total variation (TV) regularization plays a central role in the decoding models studied in this thesis. To solve such models, we propose an efficient scheme implementing the classic augmented Lagrangian method and study its convergence properties. The resulting Matlab package TVAL3 is used to solve several models. Computational results show that, thanks to its low per-iteration complexity, the proposed algorithm is capable of handling realistic 3D image processing tasks. (2) Hyperspectral image processing typically demands heavy computational resources due to the enormous amount of data involved.
We investigate low-complexity procedures to unmix, sometimes blindly, CS-compressed hyperspectral data to directly obtain material signatures and their abundance fractions, bypassing the high-complexity task of reconstructing the image cube itself. (3) To overcome the "cliff effect" suffered by current video coding schemes, we explore a compressive video sampling framework to improve scalability with respect to channel capacities. We propose and study a novel multi-resolution CS encoding matrix, and a decoding model with a TV-DCT regularization function. Extensive numerical results are presented, obtained from experiments that use not only synthetic data but also real data measured by hardware. The results establish the feasibility and robustness, to various extents, of the proposed 3D data processing schemes, models and algorithms. Many challenges remain to be resolved in each area, but hopefully the progress made in this thesis represents a useful first step towards meeting them.
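The flavor of these decoding models can be illustrated with a minimal sparse-recovery sketch: iterative soft thresholding (ISTA) for an l1-regularized least-squares model. The l1 norm is a simpler sparsity proxy than the TV models TVAL3 solves, and the sensing matrix and signal below are synthetic.

```python
import numpy as np

# Minimal compressive-sensing recovery sketch: ISTA iterations for an
# l1-regularized least-squares model. Illustrative of CS decoding in
# general, not the thesis's TV/augmented-Lagrangian algorithm.
def ista(A, b, lam, n_iter=3000):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L        # gradient step on 0.5*||Ax-b||^2
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(2)
n, m, k = 100, 40, 5                         # dimension, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true                               # noiseless CS measurements
x_hat = ista(A, b, lam=1e-3)
```

With 40 random measurements of a 5-sparse, 100-dimensional signal, the l1 recovery is close to the true signal even though the system is underdetermined.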

Estimating Seasonal Drivers in Childhood Infectious Diseases with Continuous Time Models

Abbott, George H. May 2010 (has links)
Many important factors affect the spread of childhood infectious disease. To better understand the fundamental drivers of infectious disease spread, several researchers have estimated seasonal transmission coefficients using discrete-time models. This research addresses several shortcomings of the discrete-time approaches, such as the need for the reporting interval to match the serial interval of the disease, using infectious disease data from three major cities: New York City, London, and Bangkok. Using a simultaneous approach to the optimization of differential equation systems, with a Radau collocation discretization scheme and total variation regularization of the transmission parameter profile, this research demonstrates that seasonal transmission parameters can be effectively estimated using continuous-time models. It further correlates school holiday schedules with the transmission parameter for New York City and London, where previous work has already been done, and demonstrates similar results for a city relatively unstudied in childhood infectious disease research, Bangkok, Thailand.
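The continuous-time modeling idea can be sketched as a seasonally forced SIR system with a time-varying transmission rate beta(t). The parameter values below are illustrative, not estimates fitted to any of the three cities.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Continuous-time SIR sketch with a seasonally forced transmission rate,
# the kind of beta(t) profile being estimated. Parameters are illustrative.
def sir_seasonal(t, y, beta0, beta1, gamma):
    S, I = y
    beta = beta0 * (1.0 + beta1 * np.cos(2.0 * np.pi * t))  # one-year period
    return [-beta * S * I, beta * S * I - gamma * I]

# Time in years: mean transmission 400/yr, 20% seasonal forcing,
# recovery rate 100/yr (an infectious period of a few days).
sol = solve_ivp(sir_seasonal, (0.0, 10.0), [0.9, 1e-3],
                args=(400.0, 0.2, 100.0), max_step=0.01)
S_end, I_end = sol.y[:, -1]
```

In an estimation setting, beta0 and beta1 (or a nonparametric beta(t) profile, as in the dissertation) would be the unknowns fitted to reported case counts.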

Combination of Conventional Regularization Methods and Genetic Algorithms for Solving the Inverse Problem of Electrocardiography

Sarikaya, Sedat 01 February 2010 (has links) (PDF)
The distribution of electrical potentials over the surface of the heart, i.e., the epicardial potentials, is a valuable tool for understanding whether there is a defect in the heart. However, it is not easy to detect these potentials non-invasively. Instead, body surface potentials, which occur as a result of the electrical activity of the heart, are measured to diagnose heart defects. However, the source electrical signals lose some critical details because of the attenuation and smoothing they undergo in body tissues such as the lungs and fat. Direct measurement of the epicardial potentials requires invasive procedures. Alternatively, one can reconstruct the epicardial potentials non-invasively from the body surface potentials; this is called the inverse problem of electrocardiography (ECG). The goal of this study is to solve the inverse problem of ECG using several well-known regularization methods and their combinations with a genetic algorithm (GA), and finally to compare the performances of these methods. The results show that GA can be combined with the conventional regularization methods, and that this combination improves the regularization of the ill-posed inverse ECG problem and provides a good scheme for solving it. We also suggest that GA can be initiated successfully with a training set of epicardial potentials, and with the optimum, over-, and under-regularized Tikhonov regularization solutions.
286

Parameter Estimation In Generalized Partial Linear Modelswith Tikhanov Regularization

Kayhan, Belgin 01 September 2010 (has links) (PDF)
Regression analysis refers to techniques for modeling and analyzing several variables in statistical learning. There are various types of regression models. In our study, we analyzed Generalized Partial Linear Models (GPLMs), which decomposes input variables into two sets, and additively combines classical linear models with nonlinear model part. By separating linear models from nonlinear ones, an inverse problem method Tikhonov regularization was applied for the nonlinear submodels separately, within the entire GPLM. Such a particular representation of submodels provides both a better accuracy and a better stability (regularity) under noise in the data. We aim to smooth the nonparametric part of GPLM by using a modified form of Multiple Adaptive Regression Spline (MARS) which is very useful for high-dimensional problems and does not impose any specific relationship between the predictor and dependent variables. Instead, it can estimate the contribution of the basis functions so that both the additive and interaction effects of the predictors are allowed to determine the dependent variable. The MARS algorithm has two steps: the forward and backward stepwise algorithms. In the rst one, the model is built by adding basis functions until a maximum level of complexity is reached. On the other hand, the backward stepwise algorithm starts with removing the least significant basis functions from the model. In this study, we propose to use a penalized residual sum of squares (PRSS) instead of the backward stepwise algorithm and construct PRSS for MARS as a Tikhonov regularization problem. Besides, we provide numeric example with two data sets / one has interaction and the other one does not have. As well as studying the regularization of the nonparametric part, we also mention theoretically the regularization of the parametric part. 
Furthermore, we make a comparison between Infinite Kernel Learning (IKL) and Tikhonov regularization by using two data sets, with the difference consisting in the (non-)homogeneity of the data set. The thesis concludes with an outlook on future research.
287

Modern Mathematical Methods In Modeling And Dynamics Ofregulatory Systems Of Gene-environment Networks

Defterli, Ozlem 01 September 2011 (has links) (PDF)
Inferring and anticipation of genetic networks based on experimental data and environmental measurements is a challenging research problem of mathematical modeling. In this thesis, we discuss gene-environment network models whose dynamics are represented by a class of time-continuous systems of ordinary differential equations containing unknown parameters to be optimized. Accordingly, time-discrete version of that model class is studied and improved by using different numerical methods. In this aspect, 3rd-order Heun&rsquo / s method and 4th-order classical Runge-Kutta method are newly introduced, iteration formulas are derived and corresponding matrix algebras are newly obtained. We use nonlinear mixed-integer programming for the parameter estimation and present the solution of a constrained and regularized given mixed-integer problem. By using this solution and applying the 3rd-order Heun&rsquo / s and 4th-order classical Runge-Kutta methods in the timediscretized model, we generate corresponding time-series of gene-expressions by this thesis. Two illustrative numerical examples are studied newly with an artificial data set and a realworld data set which expresses a real phenomenon. All the obtained approximate results are compared to see the goodness of the new schemes. Different step-size analysis and sensitivity tests are also investigated to obtain more accurate and stable predictions of time-series results for a better service in the real-world application areas. The presented time-continuous and time-discrete dynamical models are identified based on given data, and studied by means of an analytical theory and stability theories of rarefication, regularization and robustification.
288

Minimum Norm Regularization of Descriptor Systems by Output Feedback

Chu, D., Mehrmann, V. 30 October 1998 (has links) (PDF)
We study the regularization problem for linear, constant coefficient descriptor systems $E x^. = AX + Bu, y_1 = Cx, y_2=\Gamma x^.$ by proportional and derivative mixed output feedback. Necessary and sufficient conditions are given, which guarantee that there exist output feedbacks such that the closed-loop system is regular, has index at most one and $E +BG\Gamma$ has a desired rank, i.e. there is a desired number of differential and algebraic equations. To resolve the freedom in the choice of the feedback matrices we then discuss how to obtain the desired regularizing feedback of minimum norm and show that this approach leads to useful results in the sense of robustness only if the rank of E is decreased. Numerical procedures are derived to construct the desired feedbacks gains. These numerical procedures are based on orthogonal matrix transformations which can be implemented in a numerically stable way.
289

On the effect of Lüders bands on the bending of steel tubes

Hallai, Julian de Freitas 01 February 2012 (has links)
In several practical applications, hot-finished steel pipe that exhibits Lüders bands is bent to strains of 2-3%. Lüders banding is a material instability that leads to inhomogeneous plastic deformation in the range of 1-4%. This work investigates the influence of Lüders banding on the inelastic response and stability of tubes under rotation controlled pure bending. It starts with the results of an experimental study involving tubes of several diameter-to-thickness ratios in the range of 33.2 to 14.7 and Lüders strains of 1.8% to 2.7%. In all cases, the initial elastic regime terminates at a local moment maximum and the local nucleation of narrow angled Lüders bands of higher strain on the tension and compression sides of the tube. As the rotation continues, the bands multiply and spread axially causing the affected zone to bend to a higher curvature while the rest of the tube is still at the curvature corresponding to the initial moment maximum. With further rotation of the ends, the higher curvature zone(s) gradually spreads while the moment remains essentially unchanged. For relatively low D/t tubes and/or short Lüders strains, the whole tube eventually is deformed to the higher curvature entering the usual hardening regime. Subsequently it continues to deform uniformly until the usual limit moment instability is reached. For high D/t tubes and/or materials with longer Lüders strains, the propagation of the larger curvature is interrupted by collapse when a critical length is Lüders deformed leaving behind part of the structure essentially undeformed. The higher the D/t and/or the longer the Lüders strain is, the shorter the critical length. This class of problems is analyzed using 3D finite elements while the material is modeled as an elastic-plastic solid with an “up-down-up” response over the extent of the Lüders strain, followed by hardening. 
The analysis reproduces the main features of the mechanical behavior provided the unstable part of the response is suitably calibrated. The uniform curvature elastic regime terminates with the nucleation of localized banded deformation. The bands appear in pockets on the most deformed sites of the tube and propagate into the hitherto intact part of the structure while the moment remains essentially unchanged. The Lüders-deformed section has a higher curvature, ovalizes more than the rest of the tube, and develops wrinkles with a characteristic wavelength. For every tube D/t there exists a threshold of Lüders strain separating the two types of behavior. This bounding value of Lüders strain was studied parametrically. / text
290

Seismic data processing with curvelets: a multiscale and nonlinear approach

Herrmann, Felix J. January 2007 (has links)
In this abstract, we present a nonlinear curvelet-based sparsity-promoting formulation of a seismic processing flow, consisting of the following steps: seismic data regularization and the restoration of migration amplitudes. We show that the curvelet's wavefront detection capability and invariance under the migration-demigration operator lead to a formulation that is stable under noise and missing data.

Page generated in 0.0795 seconds