181.
An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients. Kouri, Drew (05 September 2012)
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. This adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems, as well as the global convergence of the retrospective trust region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high-performance implementation of my adaptive collocation and trust region framework in the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; it is therefore essential to choose appropriate inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.
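To make the model-management idea concrete, the following is a minimal sketch (not the thesis implementation) of a trust-region loop that refines a sparse-grid surrogate until an assumed a posteriori gradient-error bound satisfies the usual inexactness condition. Here `obj_grad_at_level` is a hypothetical callback, and a simple Cauchy-point step stands in for the full retrospective trust-region update.

```python
import numpy as np

def trust_region_inexact(x, obj_grad_at_level, delta=1.0, delta_max=10.0,
                         eta=0.1, kappa=0.5, tol=1e-6, max_iter=100):
    """Trust-region loop driven by adaptively refined surrogate models.

    obj_grad_at_level(x, level) is a hypothetical callback returning
    (f, g, gerr): objective value, gradient, and an a posteriori bound on
    the gradient error, all computed from a sparse-grid collocation model
    at the given refinement level.
    """
    level = 0
    for _ in range(max_iter):
        f, g, gerr = obj_grad_at_level(x, level)
        # Refine the collocation model until the gradient error is small
        # relative to min(||g||, delta) -- the inexactness condition that
        # preserves global convergence.
        while gerr > kappa * min(np.linalg.norm(g), delta):
            level += 1
            f, g, gerr = obj_grad_at_level(x, level)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        s = -delta * g / gnorm                 # simple Cauchy-point step
        f_new, _, _ = obj_grad_at_level(x + s, level)
        rho = (f - f_new) / (-g @ s)           # actual vs. predicted decrease
        if rho > eta:
            x, delta = x + s, min(2.0 * delta, delta_max)   # accept, expand
        else:
            delta *= 0.5                                    # reject, shrink
    return x
```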
182.
A novel approach to restoration of Poissonian images. Shaked, Elad (09 February 2010)
The problem of reconstruction of digital images from their degraded measurements is regarded as a problem of central importance in various fields of engineering and imaging sciences. In such cases, the degradation is typically caused by the resolution limitations of the imaging device in use and/or by the destructive influence of measurement noise. Specifically, when the noise obeys a Poisson probability law, standard approaches to the problem of image reconstruction are based on fixed-point algorithms that follow the methodology proposed by Richardson and Lucy in the early 1970s. The practice of using such methods, however, shows that their convergence properties tend to deteriorate at relatively high noise levels (which typically occur in so-called low-count settings). This work introduces a novel method for denoising and/or deblurring of digital images that have been corrupted by Poisson noise. The proposed method is derived in the framework of MAP estimation, under the assumption that the image of interest can be sparsely represented in the domain of a properly designed linear transform. Consequently, a shrinkage-based iterative procedure is proposed, which guarantees maximization of the associated maximum-a-posteriori criterion. It is shown in a series of both computer-simulated and real-life experiments that the proposed method outperforms a number of existing alternatives in terms of stability, precision, and computational efficiency.
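For reference, here is a minimal sketch of the classical Richardson-Lucy fixed-point iteration that this thesis improves upon (not the proposed MAP-shrinkage method). It assumes a 2-D image and a point spread function normalized to sum to one.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy fixed-point iteration for Poisson deblurring.

    y   : observed (blurred, Poisson-noisy) 2-D image, nonnegative
    psf : 2-D point spread function, normalized so that psf.sum() == 1
    """
    x = np.full(y.shape, y.mean())         # flat nonnegative initial guess
    psf_flip = psf[::-1, ::-1]             # adjoint of the convolution
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode="same")
        ratio = y / (blurred + eps)        # eps guards against division by 0
        x *= fftconvolve(ratio, psf_flip, mode="same")
    return x
```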
183.
Data-guided statistical sparse measurements modeling for compressive sensing. Schwartz, Tal Shimon (January 2013)
Digital image acquisition can be a time-consuming process in situations where high spatial resolution is required. As such, optimizing the acquisition mechanism is of high importance for many measurement applications. Acquiring such data through a dynamically chosen small subset of measurement locations can address this problem. In that case, the measured information can be regarded as incomplete, which necessitates the application of special reconstruction tools to recover the original data set. The reconstruction can be performed based on the concept of sparse signal representation; recovering signals and images from their sub-Nyquist measurements forms the core idea of compressive sensing (CS). In this work, a CS-based data-guided statistical sparse measurements method is presented, implemented, and evaluated. This method significantly improves image reconstruction from sparse measurements. In the data-guided statistical sparse measurements approach, the signal sampling distribution is optimized to improve image reconstruction performance. The sampling distribution is based on the underlying data rather than the commonly used uniform random distribution. The optimal sampling pattern probability is obtained by a learning process, using two methods: direct and indirect. The direct method learns a nonparametric probability density function directly from the dataset. The indirect method is used in cases where a mapping between extracted features and the probability density function is required. The unified model is implemented for different representation domains, including the frequency and spatial domains. Experiments were performed for multiple applications such as optical coherence tomography, bridge structure vibration, robotic vision, 3D laser range measurements, and fluorescence microscopy. Results show that the data-guided statistical sparse measurements method significantly outperforms conventional CS reconstruction: it achieves a much higher reconstruction signal-to-noise ratio at the same compression rate or, alternatively, a similar reconstruction signal-to-noise ratio with significantly fewer samples.
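A minimal sketch of the sampling step under stated assumptions: given a sampling density learned from representative data (represented here simply as a nonnegative array `pdf`), measurement locations are drawn without replacement in proportion to it. The uniform random baseline of conventional CS is the special case of a constant density.

```python
import numpy as np

def data_guided_mask(pdf, n_samples, seed=None):
    """Draw a sparse measurement mask from a learned sampling density.

    pdf : 2-D array of nonnegative weights over candidate measurement
          locations, learned from representative training data.
    Returns a boolean mask with exactly n_samples True entries.
    """
    rng = np.random.default_rng(seed)
    p = pdf.ravel() / pdf.sum()
    idx = rng.choice(p.size, size=n_samples, replace=False, p=p)
    mask = np.zeros(p.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(pdf.shape)

# The conventional CS baseline is the special case of a constant density:
# data_guided_mask(np.ones((64, 64)), n_samples=410) samples uniformly.
```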
184.
Kernelized Supervised Dictionary Learning. Jabbarzadeh Gangeh, Mehrdad (24 April 2013)
The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, which means that the signal is represented using few atoms in the dictionary. Despite recent advances in the computation of a dictionary using fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make the computation of a dictionary from millions of data samples computationally feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking into account the category information, which is not optimal in classification tasks.
In this thesis, we propose a supervised dictionary learning (SDL) approach that incorporates class-label information into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert-Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and has a closed-form solution, which makes the proposed approach fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data.
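For concreteness, here is a small sketch of the empirical HSIC computation referred to above, using the standard estimator HSIC = (n-1)^(-2) tr(KHLH); the linear kernel on signals and the delta kernel on labels in the example are illustrative choices, not necessarily those used in the thesis.

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC: (n-1)^(-2) * tr(K H L H), H = I - (1/n) 1 1^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Illustrative usage: linear kernel on signals, delta kernel on labels.
X = np.random.randn(100, 16)                   # 100 signals, 16 features
y = np.random.randint(0, 3, size=100)          # 3 classes
K = X @ X.T
L = (y[:, None] == y[None, :]).astype(float)   # 1 iff labels match
print(hsic(K, L))
```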
Moreover, the main advantage of the proposed SDL approach is that it can easily be kernelized, in particular by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in the images. The proposed formulation has been carefully designed based on MPEG encoder functionality; by design, it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes on textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared to other compression-based dissimilarity measures, as well as state-of-the-art SDL methods. It also improves the computation speed by about 40% compared to its closest rival.
Finally, we extend the proposed SDL to multiview learning, where more than one representation of a dataset is available. We propose two different multiview approaches: one fuses the feature sets in the original space and then learns the dictionary and sparse coefficients on the fused set; the other learns one dictionary and the corresponding coefficients in each view separately, and then fuses the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and investigate the relative performance of these approaches in the application of emotion recognition.
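A schematic sketch of the two fusion strategies, with `learn_dictionary` a hypothetical routine returning a dictionary and the corresponding sparse codes:

```python
import numpy as np

def fuse_then_learn(views, learn_dictionary):
    """Feature-level fusion: concatenate views, then learn one dictionary.

    views : list of (n_samples, d_i) arrays; learn_dictionary is a
    hypothetical routine returning (dictionary, sparse_codes).
    """
    X = np.hstack(views)                      # n_samples x sum(d_i)
    return learn_dictionary(X)

def learn_then_fuse(views, learn_dictionary):
    """Dictionary-level fusion: learn a dictionary per view, then
    concatenate the per-view sparse codes as the fused representation."""
    codes = [learn_dictionary(X)[1] for X in views]
    return np.hstack(codes)                   # n_samples x sum(k_i)
```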
185.
Denoising of Infrared Images Using Independent Component Analysis. Björling, Robin (January 2005)
The purpose of this thesis is to evaluate the applicability of Independent Component Analysis (ICA) to noise reduction in infrared images. The focus lies on reducing additive uncorrelated noise and the sensor-specific additive Fixed Pattern Noise (FPN). The well-known sparse code shrinkage method, which builds on ICA, is applied to reduce the uncorrelated noise degrading infrared images, and the result is compared to an adaptive Wiener filter. A novel method, also based on ICA, is developed for reducing FPN. An independent component analysis is performed on images from an infrared sensor, and typical fixed pattern noise components are identified manually. The identified components are then used to quickly and effectively reduce the FPN in images taken by that sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as real images, and the performance is measured and compared with other algorithms.
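A minimal sketch of ICA-based sparse code shrinkage on vectorized image patches; soft shrinkage is substituted here for the density-derived shrinkage nonlinearity used in the literature, and scikit-learn's FastICA stands in for the ICA estimation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def sparse_code_shrinkage(patches, threshold=0.5):
    """Denoise vectorized image patches via ICA-domain shrinkage.

    patches : (n_patches, patch_size) array. Soft shrinkage is used here
    for simplicity; the literature derives the shrinkage nonlinearity
    from the estimated sparse density of each component.
    """
    ica = FastICA(whiten="unit-variance", max_iter=1000)
    s = ica.fit_transform(patches)                 # sparse components
    s = np.sign(s) * np.maximum(np.abs(s) - threshold, 0.0)
    return ica.inverse_transform(s)                # back to patch domain
```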
186.
Simultaneous Localization And Mapping Using a Kinect in a Sparse Feature Indoor Environment. Hjelmare, Fredrik; Rangsjö, Jonas (January 2012)
Localization and mapping are two of the most central tasks for autonomous robots. They have often been performed using expensive, accurate sensors, but the fast development of consumer electronics has made similar sensors available at a more affordable price. In this master thesis, a TurtleBot robot and a Microsoft Kinect camera are used to perform Simultaneous Localization And Mapping (SLAM). The thesis presents modifications to an already existing open source SLAM algorithm. The original algorithm, based on visual odometry, is extended so that it can also make use of measurements from wheel odometry and a single-axis gyro. Measurements are fused using an Extended Kalman Filter (EKF) operating in a multirate fashion. Both the SLAM algorithm and the EKF are implemented in C++ using the Robot Operating System (ROS) framework. The implementation is evaluated on two different data sets. One set is recorded in an ordinary office room, which constitutes an environment with many landmarks. The other set is recorded in a conference room where one of the walls is flat and white, giving a partially sparse-featured environment. Providing additional sensor information results in a more robust algorithm: periods without credible visual information do not make the algorithm lose track, so it can be used in a larger variety of environments, including those where the possibility to extract landmarks is low. The results also show that the visual odometry can cancel out drift introduced by the wheel odometry and gyro sensors.
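A minimal sketch of the multirate fusion idea: an EKF over a planar pose whose `predict` runs at the high rate of the wheel odometry and gyro, and whose `update` runs whenever a slower visual-odometry pose estimate arrives. All covariances are illustrative placeholders, and heading-angle wraparound is ignored.

```python
import numpy as np

class MultirateEKF:
    """Sketch of a multirate EKF for a planar pose [x, y, theta].

    predict() runs at the high rate of wheel odometry and gyro;
    update() runs whenever a (slower) visual-odometry pose arrives.
    """
    def __init__(self):
        self.x = np.zeros(3)                    # [x, y, theta]
        self.P = np.eye(3) * 0.1
        self.Q = np.diag([0.01, 0.01, 0.005])   # process noise (placeholder)
        self.R = np.diag([0.05, 0.05, 0.02])    # visual noise (placeholder)

    def predict(self, v, omega, dt):            # v from wheels, omega from gyro
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, omega * dt])
        F = np.array([[1, 0, -v * np.sin(th) * dt],
                      [0, 1,  v * np.cos(th) * dt],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):                        # z: pose from visual odometry
        H = np.eye(3)                           # measurement is the full state
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ (z - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```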
187.
Design of Fast Multidimensional Filters by Genetic Algorithms. Langer, Max (January 2004)
The need for fast multidimensional signal processing arises in many areas. One of the more demanding applications is real-time visualization of medical data acquired with, e.g., magnetic resonance imaging, where large amounts of data can be generated. This data has to be reduced to relevant clinical information, either by image reconstruction and enhancement or by automatic feature extraction. The design of fast multidimensional filters has been a subject of research for the last three decades. Usually, methods for fast filtering are based on applying a sequence of lower-dimensional filters obtained by, e.g., weighted low-rank approximation. Filter networks are a method for designing fast multidimensional filters by decomposing multiple filters into simpler filter components whose coefficients are allowed to be sparsely scattered. Up until now, coefficient placement has been done by hand, a procedure which is time-consuming and difficult. The aim of this thesis is to investigate whether genetic algorithms can be used to place coefficients in filter networks. A method is developed and tested on 2-D filters; the resulting filters have lower distortion values while maintaining the same or a lower number of coefficients than filters designed with previously known methods.
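As an illustration of how a genetic algorithm can take over the hand placement of coefficients, here is a minimal sketch under simple assumptions: each chromosome is a binary mask with a fixed number of nonzero positions, and `fitness` is a hypothetical callback that fits the best filter realizable on a mask and returns its negated distortion.

```python
import numpy as np

def ga_place_coefficients(fitness, n_taps, n_nonzero, pop=40, gens=200,
                          p_mut=0.02, seed=None):
    """Genetic search over sparse coefficient placements.

    Each chromosome is a boolean mask over n_taps candidate positions with
    exactly n_nonzero True entries. fitness(mask) is assumed to return a
    score to maximize, e.g. the negated distortion of the best filter
    realizable on that mask (from a least-squares fit to the ideal response).
    """
    rng = np.random.default_rng(seed)

    def random_mask():
        m = np.zeros(n_taps, dtype=bool)
        m[rng.choice(n_taps, n_nonzero, replace=False)] = True
        return m

    def repair(m):  # restore exactly n_nonzero coefficients after mutation
        on = np.flatnonzero(m)
        while on.size > n_nonzero:
            m[rng.choice(on)] = False
            on = np.flatnonzero(m)
        while on.size < n_nonzero:
            m[rng.choice(np.flatnonzero(~m))] = True
            on = np.flatnonzero(m)
        return m

    population = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        order = np.argsort([fitness(m) for m in population])[::-1]
        parents = [population[i] for i in order[: pop // 2]]  # elitism
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = int(rng.integers(1, n_taps - 1))  # one-point crossover
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            child ^= rng.random(n_taps) < p_mut     # bit-flip mutation
            children.append(repair(child))
        population = parents + children
    return max(population, key=fitness)
```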
188.
Implementation and Performance Analysis of Filternets. Einarsson, Henrik (January 2006)
Today, image acquisition equipment produces huge amounts of data that need to be processed. Often the data describe signals with a dimensionality higher than 2, unlike ordinary images. This introduces a problem when it comes to processing such high-dimensional data, since ordinary signal processing tools are no longer suitable. New, faster, and more efficient tools need to be developed to fully exploit the advantages of, e.g., a 3D CT scan. One such tool is the filternet, a layered network-like structure through which the signal propagates. A filternet has three fundamental advantages that decrease the filtering time: the network structure allows complex filters to be decomposed into simpler ones, intermediate results may be reused, and filters may be implemented with very few nonzero coefficients (sparse filters). The aim of this study has been to create an implementation of filternets and optimize it with respect to execution time. In particular, the possibility of using filternets that approximate a harmonic filter set for estimating orientation in 3D signals is investigated. Tests show that this method is up to about 30 times faster than a full filter set consisting of dense filters. They also show a slightly larger error in the estimated orientation compared with the dense filters; this error should, however, not limit the usability of the method.
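A tiny sketch of the core filternet advantage on a 3-D volume: dense kernels are composed from short 1-D passes, and passes shared between outputs are computed once. The kernels below are illustrative placeholders, not those of the thesis.

```python
import numpy as np
from scipy.ndimage import convolve1d

def filter_bank_3d(volume, lp, d1):
    """Compose 3-D filters from short 1-D kernels and reuse shared passes.

    A dense 9x9x9 kernel costs 729 multiply-adds per voxel; three 9-tap
    1-D passes cost 27, and a pass shared by several outputs is computed
    only once. lp is a 1-D low-pass kernel, d1 a 1-D derivative kernel.
    """
    smooth_x = convolve1d(volume, lp, axis=0, mode="nearest")
    smooth_xy = convolve1d(smooth_x, lp, axis=1, mode="nearest")  # shared
    grad_z = convolve1d(smooth_xy, d1, axis=2, mode="nearest")
    smooth_xyz = convolve1d(smooth_xy, lp, axis=2, mode="nearest")
    return grad_z, smooth_xyz

# Illustrative kernels (assumptions, not taken from the thesis):
lp = np.array([1.0, 4.0, 6.0, 4.0, 1.0]); lp /= lp.sum()
d1 = np.array([-1.0, 0.0, 1.0]) / 2.0
```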
190.
Comparison of Classification Effects of Principal Component and Sparse Principal Component Analysis for Cardiology Ultrasound in Left Ventricle. Yang, Hsiao-ying (05 July 2012)
Because heart diseases are associated with the patterns of the diastoles and systoles of the heart in the left ventricle, we analyze and classify cardiac ultrasound images gathered from Kaohsiung Veterans General Hospital. We use the differences between the gray-scale values of diastoles and systoles in the left ventricle to evaluate heart function. Following Chen (2011) and Kao (2011), we modify the procedure for reducing and aligning the image data, and we add more subjects to the study.
We process the images in two ways, retaining the regions of interest. Since an ultrasound image, once transformed to data form, is expressed as a high-dimensional matrix, principal component analysis is adopted to retain the important factors and reduce the dimensionality. In this work, we compare the loadings calculated by ordinary principal component analysis and by sparse principal component analysis; the factor scores are then used to carry out discriminant analysis, and the classification accuracy is discussed. With the statistical methods used in this work, the accuracy, sensitivity, and specificity of the original classifications exceed 80%, and the cross-validated values exceed 60%.
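A schematic sketch of the comparison pipeline on synthetic stand-in data (the actual ultrasound data is not reproduced here): factor scores from PCA and sparse PCA feed a linear discriminant classifier, whose cross-validated accuracy is compared. For brevity, the reducer is fit on all samples before cross-validation, which a careful study would avoid.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Stand-in data: one row per subject of vectorized gray-scale differences
# between diastole and systole frames; labels 0/1 are illustrative only.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))
y = rng.integers(0, 2, size=60)

for reducer in (PCA(n_components=10), SparsePCA(n_components=10, alpha=1.0)):
    scores = reducer.fit_transform(X)          # factor scores
    acc = cross_val_score(LinearDiscriminantAnalysis(), scores, y, cv=5).mean()
    print(type(reducer).__name__, "cross-validated accuracy:", round(acc, 3))
```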