  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Non parametric density estimation via regularization

Lin, Mu 11 1900 (has links)
The thesis presents important methods, theory and applications of non-parametric density estimation via regularization in the univariate setting. It gives a brief introduction to non-parametric density estimation and discusses several well-known methods, for example the histogram and kernel methods. Regularized methods with penalization and shape constraints are the focus of the thesis. Maximum entropy density estimation is introduced, and the relationship between the taut string and maximum entropy density estimation is explored. Furthermore, the primal and dual theories are discussed, and some theoretical proofs concerning quasi-concave density estimation are presented. Different numerical methods of non-parametric density estimation with regularization are classified and compared. Finally, a real-data experiment is discussed in the last part of the thesis. / Statistics
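
As a point of reference for the kernel methods mentioned in the abstract, the following is a minimal sketch (not taken from the thesis) of a classical Gaussian kernel density estimator; the bandwidth h and the toy data are illustrative assumptions.

```python
import numpy as np

def gaussian_kde(x_grid, samples, h):
    """Classical Gaussian kernel density estimate on a grid.

    x_grid  : points at which the density is evaluated
    samples : observed univariate data
    h       : bandwidth (smoothing parameter, playing a role analogous
              to a regularization parameter)
    """
    # One Gaussian bump per observation, averaged over the sample.
    diffs = (x_grid[:, None] - samples[None, :]) / h
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / h

# Toy usage: estimate the density of a small normal sample.
rng = np.random.default_rng(0)
data = rng.normal(size=200)
grid = np.linspace(-4, 4, 400)
density = gaussian_kde(grid, data, h=0.3)
```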
12

Non parametric density estimation via regularization

Lin, Mu Unknown Date
No description available.
13

Model selection and estimation in high dimensional settings

Ngueyep Tzoumpe, Rodrigue 08 June 2015 (has links)
Several statistical problems can be described as estimation problems, where the goal is to learn a set of parameters from data by maximizing a criterion. These types of problems are typically encountered in a supervised learning setting, where we want to relate an output (or many outputs) to multiple inputs. The relationship between these outputs and inputs can be complex, and this complexity can be attributed to the high dimensionality of the space containing the inputs and the outputs; the existence of structural prior knowledge within the inputs or the outputs that, if ignored, may lead to inefficient estimates of the parameters; and the presence of a non-trivial noise structure in the data. In this thesis we propose new statistical methods to achieve model selection and estimation when there are more predictors than observations. We also design a new set of algorithms to efficiently solve the proposed statistical models. We apply the implemented methods to genetic data sets of cancer patients and to some economics data.
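
As a rough illustration of selection and estimation with more predictors than observations, here is a generic lasso sketch using scikit-learn; it is not the thesis's proposed method, and the toy data, penalty level and variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy "more predictors than observations" setting: n = 40, p = 200,
# only a handful of predictors truly matter.
rng = np.random.default_rng(0)
n, p = 40, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[3, 17, 58, 120, 199]] = [2.0, -1.5, 3.0, 1.0, -2.5]
y = X @ beta + 0.1 * rng.normal(size=n)

# The L1 penalty shrinks most coefficients exactly to zero,
# performing selection and estimation at once.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)   # indices of predictors kept by the model
```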
14

Multiresolution tomography for the ionosphere

Panicciari, Tommaso January 2016 (has links)
The ionosphere is a dynamic and ionized medium. Specification of the ionospheric electron density is important for radio systems operating up to a few GHz. Such systems include communication, navigation and surveillance operations. Computerized Ionospheric Tomography (CIT) is a technique that allows specification of the electron density in the ionosphere. CIT, unlike medical tomography, has geometric limitations such as uneven and sparse distribution of ground-based receivers and limited-angle observations. The inversion is therefore underdetermined, and regularization techniques are needed to overcome the geometric limitations of the problem. In this thesis the horizontal variation of the ionosphere is represented using wavelet basis functions. Wavelets are chosen because the ground-based ionospheric instrumentation is unevenly distributed, and hence there is an expectation that the resolution of the tomographic image will change across a large region of interest. Wavelets are able to represent structures at different scales and positions efficiently, a property known as Multi-Resolution Analysis (MRA). The theory of sparse regularization allows the use of a small number of basis functions with minimal loss of information. Furthermore, sparsity through wavelets can better differentiate between noise and actual information, which is advantageous because it increases the ability to resolve ionospheric structures at different horizontal scale sizes. The basis set is also extended to incorporate time dependence in the tomographic images by means of three-dimensional wavelets. The methods have been tested using both simulated and real observations from the Global Navigation Satellite System (GNSS). The simulation was necessary in order to have a controllable environment in which the ability to resolve structures of different scales could be tested. Further analysis of the methods also required real observations, which tested the technique under conditions of temporal dynamics that would be more difficult to reproduce with simulations, which often tend to be valid only for quiet ionospheric behavior. Improvements in the detection and reconstruction of ionospheric structures were illustrated with sparse regularization. The comparison was performed against two standard methods: the first based on spherical harmonics in space, the second relying on a time-dependent smoothing regularization. In simulation, wavelets showed the ability to resolve small-scale structures better than spherical harmonics and illustrated the potential of creating ionospheric maps at high resolution. In practice, GNSS satellite orbits produce satellite-to-receiver ray paths that traverse the ionosphere at a few hundred km per second, and hence a long time window, typically half an hour, may be required to collect sufficient observations. The assumption of an unchanging ionosphere is only valid at some locations, under very quiet geomagnetic conditions and at certain times of day. For this reason the theory was extended to include time dependence in the wavelet method. This was obtained by considering two approaches: a time-smooth regularization and three-dimensional wavelets. The wavelet method was illustrated on a European dataset and demonstrated some improvements in the reconstruction of the main trough. In conclusion, wavelets and sparse regularization were demonstrated to be a valid alternative to more standard methods.
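
The sparse-regularization idea can be sketched on a drastically simplified 1-D toy problem: an underdetermined linear system solved by iterative soft-thresholding of single-level Haar wavelet coefficients. This is only a stand-in for the multiresolution wavelet machinery of the thesis; the operator A, the signal and all parameters are illustrative assumptions.

```python
import numpy as np

def haar(x):
    # Single-level orthonormal Haar transform (length of x must be even).
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return np.concatenate([a, d])

def ihaar(c):
    half = len(c) // 2
    a, d = c[:half], c[half:]
    x = np.empty(len(c))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def ista_wavelet(A, y, lam=0.05, n_iter=300):
    """Sparse reconstruction: min_c 0.5*||A ihaar(c) - y||^2 + lam*||c||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # orthonormal transform preserves the norm
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ ihaar(c) - y
        c = c - step * haar(A.T @ r)                                # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)    # soft-threshold
    return ihaar(c)   # back to the electron-density (image) domain

# Toy usage: 20 "ray-path" measurements of a 64-pixel profile.
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 64))                  # stand-in for ray-path integrals
x_true = np.zeros(64); x_true[20:28] = 1.0     # a localized structure
y = A @ x_true + 0.01 * rng.normal(size=20)
x_rec = ista_wavelet(A, y)
```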
15

Networks and the Best Approximation Property

Girosi, Federico, Poggio, Tomaso 01 October 1989 (has links)
Networks can be considered as approximation schemes. Multilayer networks of the backpropagation type can approximate continuous functions arbitrarily well (Cybenko, 1989; Funahashi, 1989; Stinchcombe and White, 1989). We prove that networks derived from regularization theory, including Radial Basis Function networks (Poggio and Girosi, 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property of best approximation. The main result of this paper is that multilayer networks of the type used in backpropagation do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of the best approximation.
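
For context, a regularization network with Gaussian radial basis functions can be written in a few lines. The sketch below is a generic ridge-stabilized RBF fit, not code from the memo; the width and penalty values are assumptions.

```python
import numpy as np

def rbf_fit(x_train, y_train, width=0.5, reg=1e-6):
    """Fit an RBF (regularization) network f(x) = sum_j c_j exp(-(x - x_j)^2 / (2 w^2))."""
    G = np.exp(-0.5 * ((x_train[:, None] - x_train[None, :]) / width) ** 2)
    # Small ridge term acts as a Tikhonov stabilizer for the linear system.
    return np.linalg.solve(G + reg * np.eye(len(x_train)), y_train)

def rbf_predict(x_new, x_train, coef, width=0.5):
    G = np.exp(-0.5 * ((x_new[:, None] - x_train[None, :]) / width) ** 2)
    return G @ coef

# Toy usage: approximate a smooth function from noisy samples.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 30))
y = np.sin(x) + 0.05 * rng.normal(size=30)
c = rbf_fit(x, y)
y_hat = rbf_predict(np.linspace(-3, 3, 200), x, c)
```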
16

Sparse Value Function Approximation for Reinforcement Learning

Painter-Wakefield, Christopher Robert January 2013 (has links)
A key component of many reinforcement learning (RL) algorithms is the approximation of the value function. The design and selection of features for approximation in RL is crucial, and an ongoing area of research. One approach to the problem of feature selection is to apply sparsity-inducing techniques in learning the value function approximation; such sparse methods tend to select relevant features and ignore irrelevant features, thus automating the feature selection process. This dissertation describes three contributions in the area of sparse value function approximation for reinforcement learning.

One method for obtaining sparse linear approximations is the inclusion in the objective function of a penalty on the sum of the absolute values of the approximation weights. This L1 regularization approach was first applied to temporal difference learning in the LARS-inspired, batch learning algorithm LARS-TD. In our first contribution, we define an iterative update equation which has as its fixed point the L1 regularized linear fixed point of LARS-TD. The iterative update gives rise naturally to an online stochastic approximation algorithm. We prove convergence of the online algorithm and show that the L1 regularized linear fixed point is an equilibrium fixed point of the algorithm. We demonstrate the ability of the algorithm to converge to the fixed point, yielding a sparse solution with modestly better performance than unregularized linear temporal difference learning.

Our second contribution extends LARS-TD to integrate policy optimization with sparse value learning. We extend the L1 regularized linear fixed point to include a maximum over policies, defining a new, "greedy" fixed point. The greedy fixed point adds a new invariant to the set which LARS-TD maintains as it traverses its homotopy path, giving rise to a new algorithm integrating sparse value learning and optimization. The new algorithm is demonstrated to be similar in performance to policy iteration using LARS-TD.

Finally, we consider another approach to sparse learning, that of using a simple algorithm that greedily adds new features. Such algorithms have many of the good properties of the L1 regularization methods, while also being extremely efficient and, in some cases, allowing theoretical guarantees on recovery of the true form of a sparse target function from sampled data. We consider variants of orthogonal matching pursuit (OMP) applied to RL. The resulting algorithms are analyzed and compared experimentally with existing L1 regularized approaches. We demonstrate that perhaps the most natural scenario in which one might hope to achieve sparse recovery fails; however, one variant provides promising theoretical guarantees under certain assumptions on the feature dictionary, while another variant empirically outperforms prior methods in both approximation accuracy and efficiency on several benchmark problems. / Dissertation
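
As a hedged illustration of the greedy approach mentioned in the final contribution, here is a generic orthogonal matching pursuit routine for a linear regression target; the RL-specific OMP variants analyzed in the dissertation differ in how the target vector is formed (e.g. from Bellman backups), which is not shown here.

```python
import numpy as np

def omp(Phi, target, k):
    """Orthogonal matching pursuit: greedily pick up to k columns (features)
    of Phi and refit least squares on the selected set at every step."""
    n, p = Phi.shape
    residual = target.copy()
    selected = []
    w = np.zeros(p)
    for _ in range(k):
        # Feature most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j in selected:
            break
        selected.append(j)
        sub = Phi[:, selected]
        w_sub, *_ = np.linalg.lstsq(sub, target, rcond=None)
        w = np.zeros(p)
        w[selected] = w_sub
        residual = target - sub @ w_sub
    return w, selected

# Toy usage: Phi holds feature values over sampled states, target the values to fit.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 50))
target = 2.0 * Phi[:, 4] - 1.0 * Phi[:, 30] + 0.05 * rng.normal(size=200)
weights, chosen = omp(Phi, target, k=5)
```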
17

Sparseness-constrained seismic deconvolution with curvelets

Hennenfent, Gilles, Herrmann, Felix J., Neelamani, Ramesh January 2005 (has links)
Continuity along reflectors in seismic images is used via a Curvelet representation to stabilize the convolution operator inversion. The Curvelet transform is a new multiscale transform that provides sparse representations for images that comprise smooth objects separated by piecewise smooth discontinuities (e.g. seismic images). Our iterative Curvelet-regularized deconvolution algorithm combines conjugate gradient-based inversion with noise regularization performed using non-linear Curvelet coefficient thresholding. The thresholding operation enhances the sparsity of Curvelet representations. We show on a synthetic example that our algorithm provides improved resolution and continuity along reflectors as well as reduced ringing artifacts compared to the iterative Wiener-based deconvolution approach.
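
A much-simplified stand-in for this kind of algorithm can be sketched in 1-D: gradient (Landweber) iterations on the convolution operator interleaved with soft-thresholding, applied here directly to the reflectivity rather than to Curvelet coefficients of a 2-D image. The wavelet, data and threshold below are illustrative assumptions.

```python
import numpy as np

def thresholded_deconvolution(trace, wavelet, lam=0.05, n_iter=200):
    """1-D deconvolution by Landweber iterations with soft-thresholding.

    trace   : observed seismogram (reflectivity convolved with the wavelet + noise)
    wavelet : assumed known source wavelet
    """
    n = len(trace)
    # Build the convolution operator explicitly for clarity (Toeplitz-like, causal).
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - len(wavelet) + 1), i + 1):
            C[i, j] = wavelet[i - j]
    step = 1.0 / np.linalg.norm(C, 2) ** 2
    r = np.zeros(n)
    for _ in range(n_iter):
        r = r - step * C.T @ (C @ r - trace)                        # gradient step
        r = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)    # soft-threshold
    return r

# Toy usage: three spikes convolved with a short wavelet, plus noise.
rng = np.random.default_rng(3)
w = np.array([0.2, 1.0, 0.5, -0.3])
refl = np.zeros(100); refl[[20, 45, 70]] = [1.0, -0.8, 0.6]
seis = np.convolve(refl, w)[:100] + 0.02 * rng.normal(size=100)
refl_hat = thresholded_deconvolution(seis, w)
```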
18

Predictive identification of alternative events conserved in human and mouse

Yeo, Gene, Van Nostrand, Eric, Holste, Dirk, Poggio, Tomaso, Burge, Christopher 30 September 2004 (has links)
Alternative pre-messenger RNA splicing affects a majority of human genes and plays important roles in development and disease. Alternative splicing (AS) events conserved since the divergence of human and mouse are likely of primary biological importance, but relatively few such events are known. Here we describe sequence features that distinguish exons subject to evolutionarily conserved AS, which we call 'alternative-conserved exons' (ACEs), from other orthologous human/mouse exons, and integrate these features into an exon classification algorithm, ACEScan. Genome-wide analysis of annotated orthologous human-mouse exon pairs identified ~2,000 predicted ACEs. Alternative splicing was verified in both human and mouse tissues using an RT-PCR-sequencing protocol for 21 of 30 (70%) predicted ACEs tested, supporting the validity of a majority of ACEScan predictions. By contrast, AS was observed in mouse tissues for only 2 of 15 (13%) tested exons that had EST or cDNA evidence of AS in human but were not predicted ACEs, and was never observed for eleven negative control exons in human or mouse tissues. Predicted ACEs were much more likely to preserve reading frame and less likely to disrupt protein domains than other AS events, and were enriched in genes expressed in the brain and in genes involved in transcriptional regulation, RNA processing and development. Our results also imply that the vast majority of AS events represented in the human EST databases are not conserved in mouse, and therefore may represent aberrant, disease- or allele-specific, or highly lineage-restricted splicing events.
19

High-order extension of the recursive regularized lattice Boltzmann method

Coreixas, Christophe Guy 22 February 2018 (has links) (PDF)
This thesis is dedicated to the derivation and validation of a new collision model as a stabilization technique for high-order lattice Boltzmann methods (LBM). More specifically, it intends to stabilize simulations of: (1) isothermal and weakly compressible flows at high Reynolds numbers, and (2) fully compressible flows including discontinuities such as shock waves. The new collision model relies on an enhanced regularization step, which includes a recursive computation of nonequilibrium Hermite polynomial coefficients. These recursive formulas derive directly from the Chapman-Enskog expansion and make it possible to properly filter out second- (and higher-) order nonhydrodynamic contributions in underresolved conditions. This approach is all the more interesting since it is compatible with a very large number of velocity sets. The high-order LBM is first validated in the isothermal case for high-Reynolds-number flows. Coupling with a shock-capturing technique further extends its validity domain to the simulation of fully compressible flows including shock waves. The present work ends with a linear stability analysis (LSA) of the new approach in the isothermal case, which leads to a proper quantification of the impact of each discretization (velocity and numerical) on the spectral properties of the related set of equations. The LSA of the recursive regularized LBM finally confirms the drastic stability gain obtained with this new approach.
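
For orientation, the standard (non-recursive) regularized collision step that the thesis extends can be sketched for the D2Q9 lattice as follows. This is a generic single-node illustration, not the recursive scheme itself; the recursive variant additionally reconstructs higher-order nonequilibrium Hermite coefficients from the second-order one shown here.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights and lattice sound speed squared.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0

def equilibrium(rho, u):
    """Second-order polynomial equilibrium at one lattice node."""
    cu = c @ u
    usq = u @ u
    return w * rho * (1 + cu / cs2 + 0.5 * (cu / cs2) ** 2 - 0.5 * usq / cs2)

def regularized_collision(f, tau):
    """Standard regularized BGK collision at a single node.

    The nonequilibrium part is projected onto the second-order Hermite
    polynomial before relaxation, filtering out higher-order contributions.
    """
    rho = f.sum()
    u = (f[:, None] * c).sum(axis=0) / rho
    feq = equilibrium(rho, u)
    fneq = f - feq
    # Second-order nonequilibrium moment  Pi = sum_i c_i c_i f_i^neq
    Pi = np.einsum('i,ia,ib->ab', fneq, c, c)
    # Second-order Hermite polynomial  H2_i = c_i c_i - cs^2 * I
    H2 = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)[None, :, :]
    f1 = w / (2 * cs2 ** 2) * np.einsum('iab,ab->i', H2, Pi)   # regularized f^(1)
    return feq + (1.0 - 1.0 / tau) * f1
```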
20

Regularized Numerical Algorithms For Stable Parameter Estimation In Epidemiology And Implications For Forecasting

DeCamp, Linda 08 August 2017 (has links)
When an emerging outbreak occurs, stable parameter estimation and reliable projections of future incidence cases using limited (early) data can play an important role in optimal allocation of resources and in the development of effective public health intervention programs. However, the inverse parameter identification problem is ill-posed and cannot be solved with classical tools of computational mathematics. In this dissertation, various regularization methods are employed to incorporate stability in parameter estimation algorithms. The recovered parameters are then used to generate future incidence curves as well as the carrying capacity of the epidemic and the turning point of the outbreak. For the nonlinear generalized Richards model of disease progression, we develop a novel iteratively regularized Gauss-Newton-type algorithm to reconstruct major characteristics of an emerging infection. This problem-oriented numerical scheme takes full advantage of a priori information available for our specific application in order to stabilize the iterative process. Another important aspect of our research is a reliable estimation of the time-dependent transmission rate in a compartmental SEIR disease model. To that end, the ODE-constrained minimization problem is reduced to a linear Volterra integral equation of the first kind, and a combination of regularizing filters is employed to approximate the unknown transmission parameter in a stable manner. To justify our theoretical findings, extensive numerical experiments have been conducted with both synthetic and real data for various infectious diseases.
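
As a sketch of the kind of stabilization involved, the following applies a single Tikhonov spectral filter to a discretized first-kind Volterra (integration) operator. The dissertation uses problem-specific kernels and a combination of regularizing filters, so the kernel, noise level and parameter alpha here are assumptions.

```python
import numpy as np

def tikhonov_volterra(K, g, alpha):
    """Stable solution of the discretized first-kind Volterra equation K x = g.

    K     : lower-triangular matrix from a quadrature rule applied to the kernel
    g     : data vector (e.g. quantities derived from cumulative incidence)
    alpha : Tikhonov regularization parameter
    """
    # Naive inversion of K is unstable because small singular values amplify
    # data noise; the Tikhonov filter s / (s^2 + alpha) damps them.
    U, s, Vt = np.linalg.svd(K)
    filt = s / (s ** 2 + alpha)
    return Vt.T @ (filt * (U.T @ g))

# Toy usage: recover a smooth transmission-rate-like signal from noisy
# integrated data, with kernel K(t, s) = 1 for s <= t (simple integration).
n = 100
t = np.linspace(0, 1, n)
K = np.tril(np.ones((n, n))) / n
x_true = 0.5 + 0.3 * np.sin(2 * np.pi * t)
g = K @ x_true + 1e-3 * np.random.default_rng(0).normal(size=n)
x_rec = tikhonov_volterra(K, g, alpha=1e-4)
```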
