  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Regularization methods for prediction in dynamic graphs and e-marketing applications

Richard, Émile 21 November 2012 (has links) (PDF)
Predicting connections among objects, based either on a noisy observation or on a sequence of observations, is a problem of interest for numerous applications such as recommender systems for e-commerce and social networks, and also in systems biology, for inferring interaction patterns among proteins. This work presents formulations of the graph prediction problem, in both dynamic and static scenarios, as regularization problems. In the static scenario we encode the mixture of two different kinds of structural assumptions in a convex penalty involving the L1 norm and the trace norm. In the dynamic setting we assume that certain graph features, such as the node degree, follow a vector autoregressive model, and we propose to use this information to improve the accuracy of prediction. The solutions of the optimization problems are studied from both an algorithmic and a statistical point of view. Empirical evidence on synthetic and real data is presented, showing the benefit of the suggested methods.
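Schematically, such a combined penalty pairs an entrywise L1 term with a trace-norm (nuclear-norm) term. The following is a sketch of the generic form only; the exact loss and weighting used in the thesis are not reproduced here:

```latex
% Sparse + low-rank penalized estimation of an adjacency matrix S from a
% noisy observation A (generic form; loss \ell and weights are illustrative).
\min_{S \in \mathbb{R}^{n \times n}}
  \ \ell(S; A)
  \ +\ \lambda_1 \sum_{i,j} |S_{ij}|
  \ +\ \lambda_{*} \sum_{k} \sigma_k(S)
```

The L1 term promotes a sparse edge set, while the trace norm (the sum of the singular values σ_k(S)) promotes low rank, encoding a community-like structure in the predicted graph.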
72

Non-rigid image alignment for object recognition

Duchenne, Olivier 29 November 2012 (has links) (PDF)
Vision allows animals to gather rich, detailed information about their near and distant environment. Machines also have access to this rich information through their cameras, but they do not yet have adequate software to process it, transforming raw pixel values into more useful information such as the nature, position and function of surrounding objects. This is one of the reasons why it is difficult for them to move through an unknown environment and to interact with humans or equipment in unplanned scenarios. Designing such software, however, raises multiple challenges. Among them, it is difficult to compare two images with each other, for example so that the machine can recognize that what it sees is similar to an image it has already seen and identified. One reason for this difficulty is that the machine does not know, a priori, which parts of the two images correspond to each other, and therefore does not know what to compare with what. This thesis tackles this problem and proposes a series of algorithms for finding the corresponding parts between several images, or in other words for aligning the images. The first proposed method matches these parts coherently by taking into account interactions between more than two of them at once. The second proposed algorithm successfully applies an alignment method to determine the category of an object centered in an image. The third is optimized for speed and attempts to detect an object of a given category wherever it appears in the image.
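For comparison, here is a minimal pairwise matching sketch in Python (a hypothetical toy example; the first method above goes beyond this by scoring interactions among more than two correspondences, which a plain assignment problem cannot express):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features(desc_a, desc_b):
    """Match two sets of feature descriptors by minimizing the total
    pairwise descriptor distance (a one-to-one assignment)."""
    # Cost matrix: Euclidean distance between every descriptor pair.
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

# Toy usage: two small sets of 2-D descriptors.
a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
b = np.array([[1.1, 0.9], [0.1, -0.1], [2.1, 0.4]])
print(match_features(a, b))  # expect 0<->1, 1<->0, 2<->2
```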
73

Machine learning methods for discrete multi-scale flows : application to finance

Mahler, Nicolas 05 June 2012 (has links) (PDF)
This research work studies the problem of identifying and predicting the trends of a single financial target variable in a multivariate setting. The machine learning point of view on this problem is presented in chapter I. The efficient market hypothesis, which stands in contradiction with the objective of trend prediction, is first recalled. The different schools of thought in market analysis, which disagree to some extent with the efficient market hypothesis, are reviewed as well; the tenets of fundamental analysis, technical analysis and quantitative analysis are made explicit. We particularly focus on the use of machine learning techniques for computing predictions on time series. The challenges of dealing with dependent and/or non-stationary features while avoiding the usual traps of overfitting and data snooping are emphasized. Extensions of the classical statistical learning framework, particularly transfer learning, are presented. The main contribution of this chapter is the introduction of a research methodology for developing trend-predictive numerical models. It is based on an experimentation protocol consisting of four interdependent modules. The first module, entitled Data Observation and Modeling Choices, is a preliminary module devoted to the statement of very general modeling choices, hypotheses and objectives. The second module, Database Construction, turns the target and explanatory variables into features and labels in order to train trend-predictive numerical models. The purpose of the third module, entitled Model Construction, is the construction of trend-predictive numerical models. The fourth and last module, entitled Backtesting and Numerical Results, evaluates the accuracy of the trend-predictive numerical models over a "significant" test set via two generic backtesting plans. The first plan computes recognition rates of upward and downward trends. The second plan designs trading rules using predictions made over the test set; each trading rule yields a profit and loss account (P&L), the money earned cumulatively over time. These backtesting plans are complemented by interpretation functionalities, which help to analyze the decision mechanism of the numerical models. These functionalities can be measures of feature prediction ability and measures of model and prediction reliability. They decisively contribute to formulating better data hypotheses and enhancing the time-series representation, database and model construction procedures. This is made explicit in chapter IV. Numerical models, aiming at predicting the trends of the target variables introduced in chapter II, are indeed computed for the model construction methods described in chapter III and thoroughly backtested. The switch from one model construction approach to another is particularly motivated. The dramatic influence of the choice of parameters - at each step of the experimentation protocol - on the formulation of conclusion statements is also highlighted. The RNN procedure, which does not require any parameter tuning, has thus been used to reliably study the efficient market hypothesis. New research directions for designing trend-predictive models are finally discussed.
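As a sketch of the second backtesting plan, the P&L of a toy trading rule can be computed as follows (hypothetical prices and signals; the thesis's actual trading rules and protocol are more elaborate):

```python
import numpy as np

def backtest_pnl(prices, trend_predictions):
    """Cumulative P&L of a toy trading rule: hold +1 unit when an upward
    trend is predicted, -1 unit when a downward trend is predicted.

    prices            -- array of consecutive prices of the target variable
    trend_predictions -- array of +1/-1 signals, one per period
    """
    returns = np.diff(prices)                  # per-period price changes
    positions = np.asarray(trend_predictions)[:len(returns)]
    return np.cumsum(positions * returns)      # money earned cumulatively

# Hypothetical example: a short price path and perfect-foresight signals.
prices = np.array([100.0, 101.0, 100.5, 102.0, 101.0])
signals = np.array([+1, -1, +1, -1])
print(backtest_pnl(prices, signals))  # [1.0, 1.5, 3.0, 4.0]
```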
74

Some problems of multi-fluid flows : mathematical analysis, numerical modeling and simulation

Benjelloun, Saad 03 December 2012 (has links) (PDF)
This thesis consists of three independent parts. The first part presents a proof of existence of global weak solutions for a spray model of incompressible Vlasov-Navier-Stokes type with variable density. This model is obtained as a formal limit of an incompressible Vlasov-Navier-Stokes model with fragmentation, in which only two particle radii are considered: a radius r1 for the particles before fragmentation, and a radius r2 <
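For orientation, a schematic constant-density incompressible Vlasov-Navier-Stokes coupling reads as follows (a generic sketch only, not the exact variable-density system with fragmentation studied in the thesis):

```latex
% Schematic incompressible Vlasov-Navier-Stokes coupling (constant density).
\begin{aligned}
\partial_t u + (u \cdot \nabla) u - \Delta u + \nabla p
  &= -\int_{\mathbb{R}^3} (u - v)\, f \,\mathrm{d}v,
  \qquad \nabla \cdot u = 0,\\
\partial_t f + v \cdot \nabla_x f + \nabla_v \cdot \big((u - v)\, f\big) &= 0,
\end{aligned}
```

Here u is the fluid velocity, p the pressure and f(t, x, v) the droplet distribution; the drag term on the right-hand side couples the kinetic equation to the fluid equations.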
75

Structured sparsity-inducing norms : statistical and algorithmic properties with applications to neuroimaging

Jenatton, Rodolphe 24 November 2011 (has links) (PDF)
Numerous fields of applied sciences and industries have recently been witnessing a process of digitisation. This trend has come with an increase in the amount of digital data whose processing becomes a challenging task. In this context, parsimony, also known as sparsity, has emerged as a key concept in machine learning and signal processing. It is indeed appealing to exploit data only via a reduced number of parameters. This thesis focuses on a particular and more recent form of sparsity, referred to as structured sparsity. As its name indicates, we shall consider situations where we are not only interested in sparsity, but where some structural prior knowledge is also available. The goal of this thesis is to analyze the concept of structured sparsity, based on statistical, algorithmic and applied considerations. To begin with, we introduce a family of structured sparsity-inducing norms whose statistical aspects are closely studied. In particular, we show what type of prior knowledge they correspond to. We then turn to sparse structured dictionary learning, where we use the previous norms within the framework of matrix factorization. From an optimization viewpoint, we derive several efficient and scalable algorithmic tools, such as working-set strategies and proximal-gradient techniques. With these methods in place, we illustrate, on numerous real-world applications from various fields, when and why structured sparsity is useful. This includes, for instance, restoration tasks in image processing, the modelling of text documents as hierarchies of topics, the inter-subject prediction of sizes of objects from fMRI signals, and background-subtraction problems in computer vision.
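A minimal proximal-gradient sketch for the plain (unstructured) L1 case is given below; the structured norms studied in the thesis require their own proximal operators (e.g. for overlapping groups), which this sketch does not implement:

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t * ||.||_1 (plain, unstructured sparsity)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def ista(X, y, lam, step, n_iter=200):
    """Proximal-gradient (ISTA) for the lasso:
    min_w ||Xw - y||^2 / 2 + lam * ||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)          # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy usage with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true
w_hat = ista(X, y, lam=0.1, step=1.0 / np.linalg.norm(X, 2) ** 2)
print(np.round(w_hat, 2))  # approximately recovers the sparse ground truth
```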
76

Detection of large-scale turbulent structures in Rayleigh-Taylor mixing layers for the validation of two-structure turbulent statistical models

Watteaux, Romain 21 September 2011 (has links) (PDF)
The objective of this thesis is to detect the large-scale turbulent structures present in an incompressible Rayleigh-Taylor mixing layer at low Atwood number. Various statistical quantities conditioned on the presence of these structures have been obtained, and it is now possible to compare them with the results of so-called two-structure turbulent statistical models, such as the 2SFK model developed at CEA. In order to perform direct numerical simulations of the turbulent mixing, a three-dimensional incompressible variable-density numerical code was developed and parallelized in all three directions. Several structure-detection methods were designed and tested. Although all of them have their merits, only the most effective one with respect to our detection criteria was kept for high-resolution simulations (more than one billion cells, 1024^3). This detection method uses temporal filtering of the vertical velocity in order to: 1) correct the distortions due to stagnation points and recirculation zones in the flow; 2) minimize the effect of small-scale turbulence and better highlight the large scales; 3) introduce a memory effect that extends the bimodality of the detection field from the outer laminar zones to the center of the turbulent mixing zone. Several 1024^3 direct numerical simulations were carried out. The results support those obtained with the two-structure 2SFK model and justify a more detailed study of the statistical quantities with a view to its validation.
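One simple possible form of such a temporal filter is a recursive exponential average (an illustrative assumption, not necessarily the exact filter used in the thesis):

```python
import numpy as np

def temporal_filter(w_samples, alpha=0.1):
    """Recursive exponential filtering of a velocity field over time steps:
    w_bar[n] = (1 - alpha) * w_bar[n-1] + alpha * w[n].
    A small alpha gives a long memory, damping small-scale fluctuations."""
    w_bar = np.zeros_like(w_samples[0])
    history = []
    for w in w_samples:
        w_bar = (1.0 - alpha) * w_bar + alpha * w
        history.append(w_bar.copy())
    return history

# Toy usage on a scalar "velocity" signal.
samples = [np.array([s]) for s in [0.0, 1.0, 1.0, 1.0, 0.0]]
print([float(h[0]) for h in temporal_filter(samples, alpha=0.5)])
# [0.0, 0.5, 0.75, 0.875, 0.4375]
```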
77

High precision camera calibration

Tang, Zhongwei 01 July 2011 (has links) (PDF)
The thesis focuses on precision aspects of 3D reconstruction, with a particular emphasis on camera distortion correction. The causes of imprecision in stereoscopy can be found at any step of the chain: imprecision introduced at one step cancels the precision gained in the previous steps, and is then propagated, amplified or mixed with errors in the following steps, finally leading to an imprecise 3D reconstruction. It seems impossible to directly improve the overall precision of such a chain; the appropriate approach to obtain a precise 3D model is to study the precision of every component. Maximal attention is paid to camera calibration, for three reasons. First, it is often the first component in the chain. Second, it is by itself already a complicated system containing many unknown parameters. Third, the intrinsic parameters of a camera only need to be calibrated once for a given camera configuration (and at constant temperature). The camera calibration problem has been considered solved for years. Nevertheless, calibration methods and models that were valid for past precision requirements are becoming unsatisfactory for new digital cameras that permit higher precision. In our experiments, we regularly observed that current global calibration methods can leave behind a residual distortion error as large as one pixel, which can lead to distorted reconstructed scenes. We propose two methods in the thesis to correct the distortion with far higher precision. With an objective evaluation tool, it will be shown that the finally achievable correction precision is about 0.02 pixels; this value measures the average deviation of an observed straight line crossing the image domain from its perfectly straight regression line. High precision is also needed or desired for other image processing tasks crucial in 3D, such as image registration. In contrast to the advances in the invariance of feature detectors, matching precision has not been studied carefully. We analyze the SIFT method (scale-invariant feature transform) and evaluate its matching precision. It will be shown that by some simple modifications in the SIFT scale space, the matching precision can be improved to about 0.05 pixels on synthetic tests. A more realistic algorithm is also proposed to increase the registration precision of two real images when their transformation is assumed to be locally smooth. Finally, a multiple-image denoising method, called "burst denoising", is proposed to take advantage of precise image registration to estimate and remove noise at the same time. This method produces an accurate noise curve, which can be used to guide denoising by simple averaging and classic block matching. "Burst denoising" is particularly powerful at recovering fine non-periodic textured parts of images, even compared to the best state-of-the-art denoising methods.
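The straightness measure described above can be sketched as follows (a minimal illustration; line detection and the exact evaluation protocol of the thesis are omitted):

```python
import numpy as np

def straightness_rms(points):
    """RMS deviation (in pixels) of detected edge points from their own
    least-squares regression line -- the kind of straightness measure
    used to evaluate residual distortion."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Direction of the best-fit (total least squares) line = leading
    # right singular vector of the centered points.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    distances = (pts - centroid) @ normal      # signed distances to the line
    return np.sqrt(np.mean(distances ** 2))

# Toy example: a nearly straight horizontal line with a slight bow.
xs = np.linspace(0, 100, 11)
ys = 0.02 * np.sin(xs / 100 * np.pi)           # ~0.02-pixel bow
print(straightness_rms(np.column_stack([xs, ys])))
```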
78

Discrete shape modeling for geometrical product specification : contributions and applications to skin model simulation

Zhang, Min 17 October 2011 (has links) (PDF)
The management and control of product geometrical variations during the whole development process is an important issue for cost reduction, quality improvement and company competitiveness in the global manufacturing era. During the design phase, geometric functional requirements and tolerances are derived from the design intent. Geometric modeling tools now largely support the modeling of product shapes and dimensions. However, permissible geometrical variations cannot be intuitively assessed using existing modeling tools. In addition, the manufacturing and measurement stages are the two main generators of geometrical variations, according to the two well-known axioms of manufacturing imprecision and measurement uncertainty. A comprehensive view of Geometrical Product Specifications should consider not only the complete tolerancing process, tolerance modeling and tolerance representation, but also shape geometric representations and suitable processing techniques and algorithms. GeoSpelling, as the basis of the GPS standard, enables a comprehensive modeling framework and an unambiguous language to describe geometrical variations covering the overall product life cycle, thanks to a set of concepts and operations based on the fundamental concept of the "Skin Model". However, the "operationalization" of GeoSpelling has not been successfully completed, and few research studies have focused on skin model simulation. The skin model, as a discrete shape model, is the main focus of this dissertation. We investigate here the discrete geometry fundamentals of GeoSpelling, Monte Carlo simulation techniques and statistical shape analysis methods to simulate and analyze "realistic shapes" under geometrical constraint requirements (derived from functional specifications and manufacturing considerations). In addition to mapping fundamental concepts and operations to discrete-geometry ones, the work presented here investigates a discrete shape model for both random and systematic errors, taking into account second-order approximations of shapes. The concept of a mean skin model and its robust statistics are also developed. The results of the skin model simulations and visualizations are reported. By means of a case study based on a cross-shaped sheet metal part, in which the manufacturing process is simulated using finite element analysis with stochastic variations, the results of the skin model simulations are shown and the performance of the method is described.
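A Monte Carlo skin model simulation can be sketched, in a deliberately simplified 1-D profile form, as follows (all parameters and the deviation model are illustrative assumptions, not values from the thesis):

```python
import numpy as np

def simulate_skin_models(nominal_pts, n_samples, sigma=0.01, bow=0.05):
    """Monte Carlo sketch of discrete skin model instances: nominal profile
    points perturbed by a systematic second-order deviation (a quadratic
    bow) plus Gaussian random noise."""
    rng = np.random.default_rng(0)
    x = nominal_pts[:, 0]
    systematic = bow * (x / x.max()) ** 2          # systematic 2nd-order error
    samples = []
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=x.shape)   # random error
        z = nominal_pts[:, 1] + systematic + noise
        samples.append(np.column_stack([x, z]))
    return samples

# Mean skin model over the Monte Carlo instances (1-D profile for brevity).
nominal = np.column_stack([np.linspace(0.0, 10.0, 50), np.zeros(50)])
instances = simulate_skin_models(nominal, n_samples=100)
mean_skin = np.mean(instances, axis=0)
print(mean_skin.shape)  # (50, 2)
```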
79

Blending pieces of Dupin cyclides for 3D modeling and reconstruction : a study in the space of spheres

Druoton, Lucie 04 April 2013 (has links) (PDF)
The thesis deals with the blending of canal surfaces in geometric modeling using pieces of Dupin cyclides. It addresses a reconstruction problem for parts inspected and machined by CEA Valduc. By working in the appropriate space, the space of spheres, in which points, spheres and canal surfaces can all be manipulated at once, certain problems are considerably simplified. This space is represented by a 4-dimensional quadric in a 5-dimensional space equipped with the Lorentz form: the Lorentz space. In the space of spheres, the problems of blending canal surfaces with pieces of Dupin cyclides reduce to linear problems. We give algorithms that perform this type of join using the space of spheres, before returning to the usual 3-dimensional space. These joins are always made along characteristic circles of the surfaces under consideration. By solving the so-called three-contact-condition problem, we exhibit another particular curve, on a one-parameter family of cyclides, which we call the contact curve, and which would allow joins to be made along other curves.
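Schematically, in this standard presentation of the space of spheres (given here for orientation, not as the thesis's exact conventions):

```latex
% Lorentz form on R^5 and the quadric representing spheres.
\langle x, x \rangle = x_1^2 + x_2^2 + x_3^2 + x_4^2 - x_5^2,
\qquad
\Lambda^4 = \{\, x \in \mathbb{R}^5 \ :\ \langle x, x \rangle = 1 \,\}
```

Points of the quadric Λ⁴ represent the oriented spheres (and planes) of ℝ³, which is what makes the blending constraints linear in this space.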
80

Spatio-temporal processes in stochastic geometry and application to the modeling of telecommunication networks

Morlot, Frédéric 02 July 2012 (has links) (PDF)
The objective of this thesis is to unite the two approaches that currently exist for studying a crowd: either, at a fixed time, one studies the spatial distribution of the individuals, or one follows a single individual at a time over time. We propose to construct spatio-temporal processes which, as their name indicates, capture the random character of a crowd's usage of a telecommunication network, both from the spatial point of view (road models) and from the temporal point of view (movements along these roads, usage that varies during these movements, and so on). Once these processes are rigorously constructed, we study their behavior in fine detail. We develop three different models, each of which leads to closed-form analytical formulas, which makes them very convenient to use for dimensioning purposes.
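As a building block for such models, a homogeneous spatial Poisson point process can be sampled as follows (an illustrative sketch; the spatio-temporal processes of the thesis are built on richer constructions):

```python
import numpy as np

def homogeneous_poisson(rate, width, height, rng=None):
    """Draw one sample of a homogeneous spatial Poisson point process of
    intensity `rate` on a [0, width] x [0, height] window -- a basic
    stochastic-geometry model of user locations."""
    rng = rng or np.random.default_rng()
    n = rng.poisson(rate * width * height)     # number of users in the window
    xs = rng.uniform(0, width, n)
    ys = rng.uniform(0, height, n)
    return np.column_stack([xs, ys])

users = homogeneous_poisson(rate=0.5, width=10.0, height=10.0)
print(len(users), "users drawn")
```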
