51

Multiview video plus depth coding for new multimedia services

Mora, Elie-Gabriel 04 February 2014 (has links)
This PhD thesis deals with improving the coding efficiency in 3D-HEVC. We propose both constrained approaches aimed towards standardization and more innovative approaches based on optical flow. In the constrained approaches category, we first propose a method that predicts the depth Intra modes using those of the texture. The inheritance is driven by a criterion measuring how well the two are expected to match. Second, we propose two simple ways to improve inter-view motion prediction in 3D-HEVC: the first adds an inter-view disparity vector candidate to the Merge list, and the second modifies the derivation process of this disparity vector. Third, an inter-component tool is proposed in which the link between the texture and depth quadtree structures is exploited to save both runtime and bits through a joint coding of the two quadtrees. In the more innovative approaches category, we propose two methods based on dense motion vector field estimation using optical flow. The first computes such a field on a reconstructed base view; the field is then warped to a dependent view, where it is inserted as a dense candidate in the Merge list of the prediction units in that view. The second method improves the view synthesis process: four fields are computed on the left and right reference views using a past and a future temporal reference. These are then warped to the synthesized view and corrected using an epipolar constraint, and the four corresponding predictions are blended together. Both methods bring significant coding gains, confirming the potential of such innovative solutions.
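For illustration, here is a minimal Python/OpenCV sketch of the core of the first dense-field method: estimate optical flow on a reconstructed base view, then warp it to the dependent view. The Farneback estimator, the per-pixel horizontal disparity warp, and all names are illustrative assumptions, not the 3D-HEVC implementation.

```python
import cv2
import numpy as np

def dense_merge_candidate(base_prev, base_curr, disparity_px):
    # Dense optical flow between two reconstructed base-view frames
    # (8-bit grayscale); Farneback is just one possible estimator.
    flow = cv2.calcOpticalFlowFarneback(base_prev, base_curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Warp to the dependent view: each dependent-view pixel reads the
    # motion stored at its disparity-shifted location in the base view.
    map_x = np.clip(xs + disparity_px, 0, w - 1).astype(np.float32)
    warped = cv2.remap(flow, map_x, ys, cv2.INTER_LINEAR)
    return warped   # (h, w, 2) motion field, the dense Merge candidate
```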
52

B-Spline Based Multitarget Tracking

Sithiravel, Rajiv January 2014 (has links)
Multitarget tracking in the presence of false alarm is a difficult problem to consider. The objective of multitarget tracking is to estimate the number of targets and their states recursively from available observations. At any given time, targets can be born, die and spawn from already existing targets. Sensors can detect these targets with a defined threshold, where normally the observation is influenced by false alarm. Also if the targets are with low signal to noise ratio (SNR) then the targets may not be detected. The Random Finite Set (RFS) filters can be used to solve such multitarget problem efficiently. Specially, one of the best and most widely used RFS based filter is the Probability Hypothesis Density (PHD) filter. The PHD filter approximates the posterior probability density function (PDF) by the first order moment only, where the targets SNR assumed to be much higher. The PHD filter supports targets die, born, spawn and missed-detection by using the well known implementations including Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and Gaussian Mixture Probability Hypothesis Density (GM-PHD) methods. The SMC-PHD filter suffers from the well known degeneracy problems while GM-PHD filter may not be suitable for nonlinear and non-Gaussian target tracking problems. It is desirable to have a filter that can provide continuous estimates for any distribution. This is the motivation for the use of B-Splines in this thesis. One of the main focus of the thesis is the B-Spline based PHD (SPHD) filters. The Spline is a well developed theory and been used in academia and industry for more than five decades. The B-Spline can represent any numerical, geometrical and statistical functions and models including the PDF and PHD. The SPHD filter can be applied to linear, nonlinear, Gaussian and non-Gaussian multitarget tracking applications. The SPHD continuity can be maintained by selecting splines with order of three or more, which avoids the degeneracy-related problem. Another important characteristic of the SPHD filter is that the SPHD can be locally controlled, which allow the manipulations of the SPHD and its natural tendency for handling the nonlinear problems. The SPHD filter can be further extended to support maneuvering multitarget tracking, where it can be an alternative to any available PHD filter implementations. The PHD filter does not work well for very low observable (VLO) target tracking problems, where the targets SNR is normally very low. For very low SNR scenarios the PDF must be approximated by higher order moments. Therefore the PHD implementations may not be suitable for the problem considered in this thesis. One of the best estimator to use in VLO target tracking problem is the Maximum-Likelihood Probability Data Association (ML-PDA) algorithm. The standard ML-PDA algorithm is widely used in single target initialization or geolocation problems with high false alarm. The B-Spline is also used in the ML-PDA (SML-PDA) implementations. The SML-PDA algorithm has the capability to determine the global maximum of ML-PDA log-likelihood ratio with high efficiency in terms of state estimates and low computational complexity. For fast passive track initialization, search and rescue operations the SML-PDA algorithm can be used more efficiently compared to the standard ML-PDA algorithm. Also the SML-PDA algorithm with the extension supports the multitarget tracking. / Thesis / Doctor of Philosophy (PhD)
53

Asynchronous Event-Feature Detection and Tracking for SLAM Initialization

Ta, Tai January 2024 (has links)
Traditional cameras are most commonly used in visual SLAM to provide visual information about the scene and positional information about the camera motion. However, under varying illumination and rapid camera movement, the visual quality captured by traditional cameras diminishes, which limits the applicability of visual SLAM in challenging environments such as search and rescue situations. The emerging event camera has been shown to overcome these limitations thanks to its superior temporal resolution and wider dynamic range, opening up new areas of application and research for event-based SLAM. In this thesis, several asynchronous feature detectors and trackers are used to initialize SLAM from event camera data. To assess the pose estimation accuracy of the different feature detectors and trackers, initialization performance was evaluated on datasets captured in various environments. Furthermore, two different methods for aligning corner events were evaluated on the datasets. Results show that, aside from slight variation in the number of accepted initializations, the two alignment methods show no overall difference in any metric. Among the event-based trackers, HASTE achieves the highest overall initialization performance, with mostly high pose accuracy and a high number of accepted initializations, although its performance degrades in featureless scenes. CET, on the other hand, mostly performs worse than HASTE.
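A common preprocessing step behind event-based detectors is the time surface, which turns the asynchronous stream into a decayed-timestamp image. A minimal sketch follows; the decay constant and event tuple layout are assumptions, and this is not the specific pipeline evaluated in the thesis.

```python
import numpy as np

def time_surface(events, height, width, t_now, tau=0.03):
    # events: iterable of (t, x, y, polarity) tuples, timestamps in seconds.
    # Each pixel keeps the exponentially decayed timestamp of its most
    # recent event, yielding a frame-like input for corner detection.
    last_t = np.full((height, width), -np.inf)
    for t, x, y, _pol in events:
        last_t[y, x] = t
    return np.exp((last_t - t_now) / tau)   # in (0, 1], exactly 0 if no event
```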
54

Training deep neural networks, with applications to natural language processing

Glorot, Xavier 11 1900 (has links)
Machine learning aims to leverage data in order for computers to solve problems of interest. Despite being invented close to sixty years ago, Artificial Neural Networks (ANN) remain an area of active research and a powerful tool. Their resurgence in the context of deep learning has led to dramatic improvements in various domains, from computer vision and speech processing to natural language processing. The ever-growing quantity of available data and improvements in computing hardware have made it easier to train high-capacity models such as deep ANNs. However, some intrinsic learning difficulties, such as local minima, remain problematic. Deep learning aims to find solutions to these problems, either by adding regularisation or by improving optimisation; unsupervised pre-training and Dropout are examples of such solutions. The first two articles presented in this thesis follow this line of research. The first analyzes the problem of vanishing/exploding gradients in deep architectures and shows that simple choices, like the activation function or the weight initialization, can have an important impact; we propose the normalized initialization scheme to improve learning. The second focuses on the activation function, where we propose the rectified linear unit. This work was the first to emphasise piecewise-linear activation functions for deep supervised neural networks, which are now an essential component of such models. The last two papers present applications of ANNs to natural language processing. The first addresses domain adaptation in the context of sentiment analysis, using Stacked Denoising Auto-encoders, and remains state of the art to this day. The second tackles learning with multi-relational data using an energy-based model, which can also be applied to the task of word-sense disambiguation.
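Both the normalized initialization and the rectifier can be stated in a few lines. The sketch below uses the published formula, with weights drawn uniformly from [-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))]; the layer sizes are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    # Normalized initialization: W ~ U[-limit, +limit] with
    # limit = sqrt(6 / (fan_in + fan_out)), chosen so that activation
    # and gradient variances stay stable across layers.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def relu(x):
    # The rectified linear unit studied in the second article.
    return np.maximum(0.0, x)

# One forward step of a toy 256-to-128 layer under these choices.
W = glorot_uniform(256, 128)
h = relu(rng.standard_normal((32, 256)) @ W)
print(h.shape)  # (32, 128)
```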
55

A coupling of hydrologic and hydraulic models appropriate for modelling and forecasting fast floods – Application to the Gardon river basin (France)

Laganier, Olivier 29 August 2014 (has links)
The French catchments around the Mediterranean Sea are affected by intense rains that can cause fast and flash floods. The last major events are those of the Aude river in 1999, the Gard area in 2002, and the Var area in 2010, whose consequences were tragic. This PhD assesses a modelling strategy complementary to the tools already used by the regional flood warning services: the coupling of hydrologic and hydraulic models, which is a priori well suited to modelling large Mediterranean catchments (over 1,000 km², such as those of the Ardèche, Cèze, Vidourle, and Gardon rivers). The work aims at bringing elements of response to the following questions: 1) is the coupling adapted to modelling flood hydrographs of past events of moderate importance? 2) in the case of an extreme event (like September 2002), is the coupling effective for modelling discharges, water levels, and flood extension? 3) how can the modelling of ungauged lateral inflows to the hydraulic model be improved, while applying a method suited to forecasting? 4) is the coupling efficient at forecasting? The coupling combines the SCS-LR hydrologic model of the ATHYS platform (Bouvier et al., 2004) and the MASCARET 1D hydraulic model (EDF-CETMEF, 2011). It is applied to the Gardon river basin (2,040 km²) in the South of France.
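The SCS component of such a coupling rests on the classic curve-number runoff relation. A hedged sketch follows (metric form, with the usual Ia = 0.2S assumption); the SCS-LR model in ATHYS adds a lag-and-route transfer stage that is not shown here.

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number runoff depth (mm) for a rainfall depth p_mm (mm)."""
    s = 25400.0 / cn - 254.0   # potential maximum retention, mm
    ia = 0.2 * s               # initial abstraction, assumed 0.2 * S
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_runoff(120.0, cn=75), 1))  # about 56.6 mm of runoff
```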
56

Design, Implementation and Analysis of a Description Model for Complex Archaeological Objects

Ozturk, Aybuke 09 July 2018 (has links)
Ceramics are one of the most important archaeological materials for helping to reconstruct past civilizations. Information about complex ceramic objects is composed of textual, numerical, and multimedia data, which raises several research challenges addressed in this thesis. From a technical perspective, ceramic databases have different file formats, access protocols, and query languages. From a data perspective, ceramic data are heterogeneous and experts have different ways of representing and storing data. There is no standardized content and terminology, especially in terms of the description of ceramics. Moreover, data navigation and observation are difficult. Data integration is also difficult due to the presence of various dimensions from distant databases, which describe the same categories of objects in different ways. Therefore, the research project presented in this thesis aims to provide archaeologists and archaeological scientists with tools for enriching their knowledge by combining different information on ceramics. We divide our work into two complementary parts: (1) modeling of complex archaeological data and (2) clustering analysis of complex archaeological data. The first part of this thesis is dedicated to the design of a complex archaeological database model for the storage of ceramic data; this database also sources a data warehouse for online analytical processing (OLAP). The second part is dedicated to an in-depth clustering (categorization) analysis of ceramic objects. To do this, we propose a fuzzy approach, in which a ceramic object may belong to more than one cluster (category). Such a fuzzy approach is well suited to collaborating with experts, by opening new discussions based on clustering results. We contribute to fuzzy clustering in three sub-tasks: (i) a novel fuzzy clustering initialization method that keeps the approach's complexity linear; (ii) an innovative quality index that allows finding the optimal number of clusters; and (iii) the Multiple Clustering Analysis approach, which builds smart links between visual, textual, and numerical data and thereby assists in combining all types of ceramic information. Moreover, the methods we propose could also be adapted to other application domains such as economics or medicine.
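For context, a baseline fuzzy c-means loop is sketched below. It shows the membership matrix that lets one object belong to several categories, but none of the thesis's three contributions (initialization, quality index, Multiple Clustering Analysis) are reproduced here.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    # X: (n, features) array of object descriptors; c: number of clusters.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m                               # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard update: u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return U, centers

# Each row of U gives one object's graded membership in every cluster,
# the starting point for discussing ambiguous ceramics with experts.
```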
57

Robust Localization of Autonomous Vehicles Using Landmarks

Grünwedel, Sebastian 07 February 2008 (has links)
The localization of vehicles is of vital importance in the field of driver assistance systems and a requirement for various robotics applications, e.g. navigation or collision avoidance for automated guided vehicle (AGV) systems. In this thesis, an approach for localization by means of landmarks is introduced, which enables orientation with respect to a map. The extended Kalman filter and the particle filter are analyzed and compared for this task, with the main focus on the particle filter. The particular problem of initialization is discussed in detail for both filters. Simulations and experiments show that the particle filter is suitable for robust localization of the vehicle position; in comparison, the extended Kalman filter can only be used to a limited extent.
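A minimal sketch of one landmark-driven particle filter measurement update is given below; the range-only sensor model and noise level are illustrative assumptions rather than the configuration studied in the thesis.

```python
import numpy as np

def pf_range_update(particles, weights, z_range, landmark, sigma=0.5):
    # particles: (N, 3) pose hypotheses (x, y, heading); landmark: (x, y).
    expected = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    # Weight each hypothesis by the likelihood of the range measurement.
    weights = weights * np.exp(-0.5 * ((z_range - expected) / sigma) ** 2)
    weights = weights / (weights.sum() + 1e-300)
    # Systematic resampling counters weight degeneracy and is one reason
    # the particle filter stays robust where the EKF struggles.
    n = len(weights)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```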
58

Control of water supply for a house

Chvátal, Michal January 2021 (has links)
The diploma thesis deals with the design and implementation of a system that controls the water supply for a family house and its garden. The system also stores a history that can be viewed via the web interface, which further allows setting system parameters and monitoring the current status.
59

Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control

Hakala, Tim 31 January 2006 (has links) (PDF)
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanometers without limit cycles or steady-state error in machines subject to dry sliding friction. These displacements are achieved through the application of short duration, high intensity pulses. The relationship between pulse duration and displacement is seldom a simple function. The most dependable practical methods for control are self-tuning; they learn from online experience by adapting an internal control parameter until precise position control is achieved. To date, the best known adaptive pulse control methods adapt a single control parameter. While effective, the single parameter methods suffer from sub-optimal settling times and poor parameter convergence. To improve performance while maintaining the capacity for ultimate precision, a new control method referred to as Adaptive Impulse Control (AIC) has been developed. To better fit the nonlinear relationship between pulses and displacements, AIC adaptively tunes a set of parameters. Each parameter affects a different range of displacements. Online updates depend on the residual control error following each pulse, an estimate of pulse sensitivity, and a learning gain. After an update is calculated, it is distributed among the parameters that were used to calculate the most recent pulse. As the stored relationship converges to the actual relationship of the machine, pulses become more accurate and fewer pulses are needed to reach each desired destination. When fewer pulses are needed, settling time improves and efficiency increases. AIC is experimentally compared to conventional PID control and other adaptive pulse control methods on a rotary system with a position measurement resolution of 16000 encoder counts per revolution of the load wheel. The friction in the test system is nonlinear and irregular with a position dependent break-away torque that varies by a factor of more than 1.8 to 1. AIC is shown to improve settling times by as much as a factor of two when compared to other adaptive pulse control methods while maintaining precise control tolerances.
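A hedged sketch of the multi-parameter idea follows: a displacement-binned pulse table updated online from the residual error, an estimated pulse sensitivity, and a learning gain. The bin edges, gain, and units are illustrative, not the experimental values.

```python
import numpy as np

class AdaptivePulseTable:
    # A table of pulse durations, one per displacement range, so different
    # step sizes adapt independently (the multi-parameter aspect of AIC).
    def __init__(self, edges_um, init_us=50.0, gain=0.3):
        self.edges = np.asarray(edges_um)                     # bin edges, um
        self.durations = np.full(len(edges_um) - 1, init_us)  # pulse widths, us
        self.gain = gain                                      # learning gain

    def pulse_for(self, step_um):
        return self.durations[np.searchsorted(self.edges, step_um) - 1]

    def update(self, step_um, achieved_um, sens_um_per_us):
        # Residual error after the pulse, converted into a duration
        # correction via the estimated pulse sensitivity and the gain.
        i = np.searchsorted(self.edges, step_um) - 1
        self.durations[i] += self.gain * (step_um - achieved_um) / sens_um_per_us

table = AdaptivePulseTable(edges_um=[0.01, 0.1, 1.0, 10.0])
```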
60

Design of hardware cipher module

Bayer, Tomáš January 2009 (has links)
This diploma thesis discusses cryptographic systems and ciphers, analysing their function, usage, and practical implementation. The first chapter introduces basic cryptographic terms and symmetric and asymmetric cryptographic algorithms, and analyses their usage and reliability. The following chapters cover substitution, transposition, block, and stream ciphers, which are elementary to most cryptographic algorithms, and the modes in which these ciphers operate. The fourth chapter describes the principles of selected cryptographic algorithms, with the objective of making the essence of each algorithm's behaviour clear; block schemes accompany the more difficult algorithms, and each description ends with an example of practical usage. Chapter five discusses hardware implementation: hardware and software implementations are compared from a practical point of view, several design tools are described, and different hardware description languages are presented along with their history, advantages, and disadvantages. Chapter six covers the hardware implementation design of the chosen ciphers. Concretely, a stream cipher with a pseudo-random sequence generator was designed in VHDL and also in Matlab, and as the second design the GOST block cipher was implemented in VHDL. Both designs were tested and verified, and the results are summarized.
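As a toy counterpart to the VHDL stream-cipher design, a Fibonacci LFSR keystream generator can be written in a few lines of Python. The 16-bit width, seed, and tap positions below are the textbook example, not the thesis's generator.

```python
def lfsr_keystream(state, taps, nbits):
    # Fibonacci LFSR: output the low bit, then shift in the XOR of the taps.
    out = []
    for _ in range(nbits):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 15)   # 16-bit register
    return out

bits = lfsr_keystream(0xACE1, taps=(0, 2, 3, 5), nbits=8)
keystream_byte = int("".join(map(str, bits)), 2)
cipher_byte = 0x5A ^ keystream_byte   # stream encryption is a simple XOR
```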
