231

Algorithmes de références 'robustes' pour la métrologie dimensionnelle des surfaces asphériques et des surfaces complexes en optique / Robust Reference Algorithms for form metrology : Application to aspherical and freeform optics

Arezki, Yassir 05 December 2019 (has links)
Aspheres and freeform surfaces are a very challenging class of optical elements. Their application has grown considerably in the last few years in imaging systems, astronomy, lithography, etc. The metrology of these parts is very challenging because of the high dynamic range of the acquired information and the traceability to the SI unit meter. It should make use of the infinity norm (the Minimum Zone, or Min-Max, method) to calculate the envelope enclosing the points in the dataset by minimizing the difference between the maximum deviation and the minimum deviation between the surface and the dataset. This method grows in complexity as the number of points in the dataset increases, and the algorithms involved are non-deterministic. Although this method works for simple geometries (lines, planes, circles, cylinders, cones and spheres), it remains a major challenge when used on complex geometries (aspherical and freeform surfaces). The main objective of the thesis is therefore to develop Min-Max fitting algorithms for both aspherical and freeform surfaces, as well as least-squares fitting algorithms, in order to provide robust reference algorithms for the large community involved in this domain. The reference algorithms to be developed should be evaluated and validated on several reference data sets (softgauges) that will be generated using reference data generators.
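The Min-Max criterion described above can be illustrated with a small sketch for the simplest geometry, a 2D line (a hypothetical toy illustration, not the thesis's reference algorithm). The zone width max(r_i) - min(r_i) of the residuals r_i = y_i - a*x_i does not depend on the offset b, so the search reduces to one dimension over the slope, where the width is convex:

```python
# Toy Minimum Zone (Min-Max) line fit: find the slope a minimizing the width of
# the enclosing envelope, max_i(r_i) - min_i(r_i), with r_i = y_i - a*x_i.
# The offset b cancels out of the width, so a 1-D search over a suffices.
def zone_width(points, a):
    residuals = [y - a * x for x, y in points]
    return max(residuals) - min(residuals)

def minmax_line_fit(points, lo=-10.0, hi=10.0, tol=1e-9):
    """Golden-section search over the slope; zone_width is convex in a."""
    phi = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if zone_width(points, m1) < zone_width(points, m2):
            hi = m2
        else:
            lo = m1
    a = (lo + hi) / 2
    residuals = [y - a * x for x, y in points]
    b = (max(residuals) + min(residuals)) / 2  # mid-range offset centers the zone
    return a, b, zone_width(points, a)
```

For aspherical and freeform surfaces the residual is a nonlinear function of the shape parameters, which is precisely why the problem becomes non-deterministic and so much harder than this convex 1-D case.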
232

DEVELOPMENT OF MULTIMODAL FUSION-BASED VISUAL DATA ANALYTICS FOR ROBOTIC INSPECTION AND CONDITION ASSESSMENT

Tarutal Ghosh Mondal (11775980) 01 December 2021 (has links)
This dissertation broadly focuses on autonomous condition assessment of civil infrastructure using vision-based methods, which present a plausible alternative to existing manual techniques. A region-based convolutional neural network (Faster R-CNN) is exploited to detect various earthquake-induced damages in reinforced concrete buildings. Four damage categories are considered: surface cracks, spalling, spalling with exposed rebars, and severely buckled rebars. The performance of the model is evaluated on image data collected from buildings damaged in several past earthquakes in different parts of the world. The proposed algorithm can be integrated with inspection drones or mobile robotic platforms for quick assessment of damaged buildings, leading to expeditious planning of retrofit operations, minimization of damage cost, and timely restoration of essential services.

Besides, a computer vision-based approach is presented to track the evolution of damage over time by analysing historical visual inspection data. Once a defect is detected in a recent inspection data set, its spatial correspondences in the data collected during previous rounds of inspection are identified leveraging popular computer vision techniques. A single reconstructed view is then generated for each inspection round by synthesizing the candidate corresponding images. The chronology of damage thus established facilitates time-based quantification and lucid visual interpretation. This study is likely to enhance the efficiency of structural inspection by introducing the time dimension into the autonomous condition assessment pipeline.

Additionally, this dissertation incorporates depth fusion into a CNN-based semantic segmentation model. A 3D animation and visual effects software package is exploited to generate a synthetic database of spatially aligned RGB and depth image pairs representing damage categories commonly observed in reinforced concrete buildings. A number of encoding techniques are explored for representing the depth data, and various schemes for fusing RGB and depth data are investigated to identify the best fusion strategy. Depth fusion was observed to enhance the performance of deep learning-based damage segmentation algorithms significantly. Furthermore, strategies are proposed to estimate depth information from the corresponding RGB frame, which eliminates the need for depth sensing at deployment time without compromising segmentation performance. Overall, the scientific research presented in this dissertation is a stepping stone towards realizing a fully autonomous structural condition assessment pipeline.
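One common fusion scheme the abstract alludes to is input-level (early) fusion, where the depth map is simply appended as an extra channel before the network sees the data. A minimal sketch, with the data layout as an assumption rather than the dissertation's actual pipeline:

```python
# Hypothetical early (input-level) RGB-D fusion: the depth map becomes a fourth
# channel, so a segmentation CNN receives an H x W x 4 input instead of H x W x 3.
# Nested lists stand in for image tensors here.
def early_fuse(rgb, depth):
    """rgb: H x W x 3 nested lists; depth: H x W scalars -> H x W x 4."""
    return [[pixel + [d] for pixel, d in zip(row_rgb, row_d)]
            for row_rgb, row_d in zip(rgb, depth)]
```

Later-stage alternatives (feature-level or decision-level fusion) combine the two modalities deeper in the network instead, which is the kind of design choice the dissertation's comparison of fusion schemes addresses.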
233

Analýza a zefektivnění distribuovaných systémů / Analysis and Improvement of Distributed Systems

Kenyeres, Martin January 2018 (has links)
Significant progress in the evolution of computer systems and their interconnection over the past 70 years has allowed the frequently used centralized architectures to be replaced with highly distributed ones, formed by independent entities that fulfil specific functionalities as a single unit whose distributed nature is hidden from the user. This has resulted in intense scientific interest in distributed algorithms and their frequent implementation in real systems. In particular, distributed algorithms for multi-sensor data fusion, which ensure an enhanced QoS of executed applications, find wide usage. This doctoral thesis addresses the optimization and analysis of distributed systems, namely distributed consensus-based algorithms for aggregate function estimation (primarily, attention is focused on mean estimation). The first section covers the theoretical background of distributed systems, their evolution, their architectures, and a comparison with centralized systems (i.e. their advantages and disadvantages). The second chapter deals with multi-sensor data fusion, its applications, the classification of distributed estimation techniques, their mathematical modelling, and frequently cited algorithms for distributed averaging (e.g. the Push-Sum protocol, Metropolis-Hastings weights, Best Constant weights, etc.). The practical part focuses on mechanisms for optimizing distributed systems, the proposal of novel algorithms and complements for distributed systems, their analysis, and comparative studies in terms of convergence rate, estimation precision, robustness, applicability to real systems, etc.
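The Push-Sum protocol mentioned above can be sketched in a few lines (a toy illustration under assumed conditions: a fixed ring of n nodes, each node halving its (sum, weight) pair between itself and its successor every round; the thesis considers more general topologies). Each node's ratio s_i / w_i converges to the global mean:

```python
# Toy Push-Sum averaging on a directed ring with self-loops. Each node keeps a
# running sum s and weight w, keeps half of each, and pushes the other half to
# its successor. All ratios s_i / w_i converge to the mean of the inputs.
def push_sum(values, rounds=60):
    n = len(values)
    s = list(values)      # running sums, initialized to the local measurements
    w = [1.0] * n         # running weights, initialized to 1
    for _ in range(rounds):
        new_s = [0.0] * n
        new_w = [0.0] * n
        for i in range(n):
            for j in (i, (i + 1) % n):   # keep half, push half to the successor
                new_s[j] += s[i] / 2.0
                new_w[j] += w[i] / 2.0
        s, w = new_s, new_w
    return [si / wi for si, wi in zip(s, w)]
```

The total sum and total weight are conserved in every round, which is why the ratios converge to the true mean rather than a weighted average depending on the topology.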
234

Detekce aktuálního podlaží při jízdě výtahem / Floor detection during elevator ride

Havelka, Martin January 2021 (has links)
This diploma thesis deals with the detection of the current floor during an elevator ride. This functionality is necessary for a robot to move in a multi-floor building. For this task, a fusion of accelerometric data recorded during the elevator ride and image data obtained from the information display inside the elevator cabin is used. The research part describes already implemented solutions, data fusion methods and image classification options. Based on this part, suitable approaches for solving the problem were proposed. First, datasets from different types of elevator cabins were obtained. An algorithm for processing the data from the accelerometric sensor was developed. A convolutional neural network was selected and trained to classify the image data from the displays. Subsequently, the data fusion method was implemented. The individual parts were tested and evaluated and, based on their evaluation, integrated into one functional system. The system was successfully verified and tested; the detection success rate during rides in different elevators was 97 %.
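One plausible way to turn accelerometric data into a floor estimate is to double-integrate the vertical acceleration over the ride and divide the resulting displacement by the storey height. This is a hedged sketch of that idea only; the sampling rate, storey height, and integration scheme are assumptions, not values from the thesis:

```python
# Double-integrate vertical acceleration (gravity already removed) over the ride
# to get cabin displacement, then divide by an assumed storey height to count
# floors travelled. Simple rectangle-rule integration; dt is the sample period.
def floors_travelled(accel, dt=0.01, storey_height=3.0):
    velocity = 0.0
    displacement = 0.0
    for a in accel:
        velocity += a * dt
        displacement += velocity * dt
    return round(displacement / storey_height)
```

In practice drift in the integrated signal is why fusing this cue with the display-reading CNN, as the thesis does, gives a more reliable result than either source alone.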
235

Tvorba multispektrálních map v mobilní robotice / Multispectral Map Building in Mobile Robotics

Burian, František January 2015 (has links)
The dissertation deals with the utilisation of multispectral optical measurement for data fusion that may be used for visual telepresence and indoor/outdoor mapping by a heterogeneous mobile robotic system. Optical proximity sensors, thermal imagers, and tricolour cameras are used for the fusion. The described algorithms are optimised to work in real time and are implemented on the CASSANDRA robotic system made by our robotic research group.
236

Tackling pedestrian detection in large scenes with multiple views and representations / Une approche réaliste de la détection de piétons multi-vues et multi-représentations pour des scènes extérieures

Pellicanò, Nicola 21 December 2018 (has links)
Pedestrian detection and tracking have become important fields in Computer Vision research due to their implications for many applications, e.g. surveillance, autonomous cars, and robotics. Pedestrian detection in high-density crowds is a natural extension of this research body. The ability to track each pedestrian independently in a dense crowd has multiple applications: the study of human social behavior under high densities, the detection of anomalies, and large-event infrastructure planning. On the other hand, high-density crowds introduce novel problems for the detection task. First, clutter and occlusion problems are taken to the extreme, so that only heads are visible, and they are not easily separable from the moving background. Second, heads are usually small (typically less than ten pixels in diameter) and have little or no texture. This follows from two independent constraints: the need for each camera to have a field of view as wide as possible, and the need for anonymization, i.e. the pedestrians must not be identifiable, because of privacy concerns.

In this work we develop a complete framework to handle the pedestrian detection and tracking problems under these novel difficulties, using multiple cameras in order to implicitly handle the heavy occlusion. As a first contribution, we propose a robust method for camera pose estimation in surveillance environments. We handle problems such as large distances between cameras, large perspective variations, and scarcity of matching information by exploiting an entire video stream to perform the calibration, in such a way that it exhibits fast convergence to a good solution. Moreover, we are concerned not only with the global fitness of the solution, but also with reaching low local errors.

As a second contribution, we propose an unsupervised multiple-camera detection method which exploits the visual consistency of pixels between multiple views in order to estimate the presence of a pedestrian. After a fully automatic metric registration of the scene, one can jointly estimate the presence of a pedestrian and its height, allowing for the projection of detections onto a common ground plane, and thus for 3D tracking, which can be much more robust than image-space tracking.

In the third part, we study different methods for performing supervised pedestrian detection on single views. Specifically, we aim to build a dense pedestrian segmentation of the scene starting from spatially imprecise labeling of the data, i.e. head centers instead of full head contours, since their extraction is unfeasible in a dense crowd. Most notably, deep architectures for semantic segmentation are studied and adapted to the problem of small-head detection in cluttered environments.

As a final contribution, we propose a novel framework for performing efficient information fusion in 2D spaces. The aim is to perform multiple-sensor fusion (supervised detectors on each view, and an unsupervised detector on multiple views) at the ground-plane level, which is thus our discernment frame. Since the space complexity of such a discernment frame is very large, we propose an efficient compound-hypothesis representation which has been shown to be invariant to the scale of the search space. Through this representation, we are capable of defining efficient basic operators and combination rules of Belief Function Theory. Furthermore, we propose a complementary graph-based description of the relationships between compound hypotheses (i.e. intersections and inclusions), enabling efficient algorithms for, e.g., high-level decision making. Finally, we demonstrate our information fusion approach both at a spatial level, i.e. between detectors of different natures, and at a temporal level, by performing evidential tracking of pedestrians on real large-scale scenes in sparse and dense conditions.
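The Belief Function Theory combination the thesis builds on can be illustrated with the classical, unoptimized form of Dempster's rule over a tiny discernment frame (pedestrian vs. background). This is a textbook sketch, not the thesis's scale-invariant compound-hypothesis representation:

```python
# Textbook Dempster's rule of combination: masses of intersecting focal
# elements multiply and accumulate; mass sent to the empty set is conflict,
# which is normalized away. Focal elements are frozensets over the frame.
def dempster_combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

The naive rule enumerates all pairs of focal elements, which becomes intractable on a large ground-plane discernment frame; that combinatorial blow-up is what motivates the efficient representation and operators proposed in the thesis.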
237

Data Fusion in Spatial Data Infrastructures

Wiemann, Stefan 28 October 2016 (has links)
Over the past decade, the public awareness and availability of spatial data on the Web, as well as methods for their creation and use, have steadily increased. Besides the establishment of governmental Spatial Data Infrastructures (SDIs), numerous volunteered and commercial initiatives have had a major impact on that development. Nevertheless, data isolation still poses a major challenge. Whereas the majority of approaches focus on data provision, means to dynamically link and combine spatial data from distributed, often heterogeneous data sources in an ad hoc manner are still very limited. However, such capabilities are essential to support and enhance information retrieval for comprehensive spatial decision making.

To facilitate spatial data fusion in current SDIs, this thesis has two main objectives. First, it focuses on the conceptualization of a service-based fusion process to functionally extend current SDIs and to allow for the combination of spatial data from different spatial data services. It mainly addresses the decomposition of the fusion process into well-defined and reusable functional building blocks and their implementation as services, which can be used to dynamically compose meaningful application-specific processing workflows. Moreover, geoprocessing patterns, i.e. service chains that are commonly used to solve certain fusion subtasks, are designed to simplify and automate workflow composition. Second, the thesis deals with the determination, description and exploitation of spatial data relations, which play a decisive role in spatial data fusion. The approach adopted is based on the Linked Data paradigm and therefore bridges SDI and Semantic Web developments. Whereas the original spatial data remain within SDI structures, relations between those sources can be used to infer spatial information by means of Semantic Web standards and software tools.

A number of use cases were developed, implemented and evaluated to underpin the proposed concepts. Particular emphasis was put on the use of established open standards to realize an interoperable, transparent and extensible spatial data fusion process and to support the formalized description of spatial data relations. The developed software, which is based on a modular architecture, is available online as open source. It allows for the development and seamless integration of new functionality, as well as the use of external data and processing services during workflow composition on the Web.
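The decomposition into reusable building blocks chained into workflows can be pictured with a toy sketch. The block names and the list-of-features data model are illustrative assumptions; in the thesis the blocks are web services behind standardized interfaces, not local functions:

```python
# Toy service-chaining sketch: each fusion building block is a function over a
# feature list, and a geoprocessing workflow is an ordered chain of blocks.
def chain(*blocks):
    def workflow(features):
        for block in blocks:
            features = block(features)
        return features
    return workflow

# Two illustrative building blocks: clip features to a bounding box, then
# deduplicate features that share identical coordinates.
def bbox_filter(xmin, ymin, xmax, ymax):
    def block(features):
        return [f for f in features
                if xmin <= f["x"] <= xmax and ymin <= f["y"] <= ymax]
    return block

def deduplicate(features):
    seen, out = set(), []
    for f in features:
        key = (f["x"], f["y"])
        if key not in seen:
            seen.add(key)
            out.append(f)
    return out
```

A geoprocessing pattern, in these terms, is a pre-assembled chain like `chain(bbox_filter(...), deduplicate)` that captures a recurring fusion subtask so users need not recompose it from scratch.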
238

Automatizované odvození geometrie jízdních pruhů na základě leteckých snímků a existujících prostorových dat / Automatic detection of driving lanes geometry based on aerial images and existing spatial data

Růžička, Jakub January 2020 (has links)
The aim of the thesis is to develop a method to identify driving lanes based on aerial images and existing spatial data. The proposed method uses up-to-date available data in which it identifies road surface marking (RSM). Polygons classified as RSM are further processed to obtain their vector line representation as the first partial result. By processing the RSM vectors further, the borders of driving lanes are modelled as the second partial result. Furthermore, attempts were made to automatically distinguish between solid and broken lines, so that the resulting dataset carries more information. The proposed algorithms were tested in 20 case-study areas and the results are presented in this thesis. The overall correctness as well as the positional accuracy demonstrates the effectiveness of the method. However, several shortcomings were identified and are discussed, and possible solutions are suggested. The text is accompanied by more than 70 figures to offer a clear perspective on the topic. The thesis is organised as follows: first, the Introduction and Literature review are presented, including the problem background, the author's motivation, the state of the art, and the contribution of the thesis. Secondly, the technical and legal requirements of RSM are presented, as well as theoretical concepts and...
239

Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping / Radar och optisk datafusion för objektbaserad kartering av urbant marktäcke

Jacob, Alexander January 2011 (has links)
The creation and classification of segments for object-based urban land cover mapping is the key goal of this master thesis. An algorithm based on region growing and merging was developed, implemented and tested. The synergy effects of a fused dataset of SAR and optical imagery were evaluated based on the classification results. Testing was mainly performed on data of the city of Beijing, China. The dataset consists of SAR and optical data, and the classified land cover/use maps were evaluated using standard methods for accuracy assessment such as confusion matrices, kappa values and overall accuracy. The classification used for testing consists of 9 classes: low-density built-up, high-density built-up, road, park, water, golf course, forest, agricultural crop and airport.

The development was performed in Java, and a graphical interface for user-friendly interaction was created in parallel with the algorithm. This proved very useful during the period of extensive parameter testing, as parameters could easily be entered through the dialogs of the interface. The algorithm itself treats the image as a connected graph of pixels which can always merge with their direct neighbors, i.e. the pixels they share an edge with. Three criteria can be used in the current state of the algorithm: a mean-based spectral homogeneity measure, a variance-based textural homogeneity measure, and a fragmentation test as a shape measure. The algorithm has three key parameters: the minimum and maximum segment sizes, and a homogeneity threshold based on a weighted combination of the relative change caused by merging two segments. The growing and merging is divided into two phases: the first is based on mutual-best-partner merging and the second on the homogeneity threshold. In both phases it is possible to use all three criteria for merging in arbitrary weighting constellations. A third step checks the fulfillment of the minimum size and can be performed before or after the other two steps.

The segments can then be labeled interactively in a supervised manner, once again using the graphical user interface, to create a training sample set. This training set can be used to train a support vector machine based on a radial basis function kernel. The optimal settings for the required parameters of this SVM training process can be found by a cross-validation grid search, which is implemented within the program as well. The SVM algorithm is based on the LibSVM Java implementation. Once training is completed, the SVM can be used to predict the whole dataset and obtain a classified land-cover map, which can be exported as a vector dataset.

The results show that incorporating texture features already in the segmentation is superior to using spectral information alone, especially when working with unfiltered SAR data. Incorporating the suggested shape feature, however, does not appear advantageous, especially considering the much longer processing time this criterion incurs. From the classification results it is also evident that the fusion of SAR and optical data is beneficial for urban land cover mapping. In particular, the distinction between urban areas and agricultural crops was improved greatly, and the confusion between high and low density built-up areas was also reduced thanks to the fusion. / Dragon 2 Project
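The mean-based spectral homogeneity criterion described above can be sketched on a 1-D signal standing in for image pixels (a loose illustration, not the thesis's Java implementation; the threshold value and the left-to-right merge order are assumptions):

```python
# Toy region merging with a mean-based homogeneity criterion: two adjacent
# segments merge while the relative change of the combined mean, compared to
# the first segment's mean, stays below a threshold.
def merge_segments(values, threshold=0.1):
    segments = [[v] for v in values]          # start from single "pixels"
    merged = True
    while merged:                             # iterate until no merge succeeds
        merged = False
        i = 0
        while i < len(segments) - 1:
            a, b = segments[i], segments[i + 1]
            mean_a = sum(a) / len(a)
            mean_ab = (sum(a) + sum(b)) / (len(a) + len(b))
            if abs(mean_ab - mean_a) <= threshold * abs(mean_a):
                segments[i] = a + b           # accept the merge, keep position
                del segments[i + 1]
                merged = True
            else:
                i += 1                        # reject, try the next pair
    return segments
```

The thesis's mutual-best-partner phase differs in that a pair merges only when each segment is the other's best candidate, which makes the result less dependent on traversal order than this simple sweep.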
240

Spatio-temporal Analysis of Urban Heat Island and Heat Wave Evolution using Time-series Remote Sensing Images: Method and Applications

Yang, Bo 11 June 2019 (has links)
No description available.
