41

Assessing the operational value of situational awareness for AEGIS and Ship Self Defense System (SSDS) platforms through the application of the Knowledge Value Added (KVA) methodology

Uchytil, Joseph. 06 1900 (has links)
As the United States Navy strives to attain a myriad of situational awareness systems that provide the functionality and interoperability required for future missions, the fundamental idea of open architecture is beginning to take hold throughout the Department. In order to make rational, informed decisions concerning the processes and systems that will be integrated to provide this situational awareness, an analytical method must be used to identify process deficiencies and produce quantifiable measurement indicators. This thesis will apply the Knowledge Value Added methodology to the current processes involved in track management aboard the AEGIS and Ship Self Defense System (SSDS) platforms. Additional analysis will be conducted based on notional changes that could occur were the systems designed using an open architecture approach. A valuation based on knowledge assets will be presented in order to.
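
A minimal sketch of the kind of calculation KVA supports, assuming the common "learning time" proxy for embedded knowledge. The sub-processes, learning times, costs, and revenue figure below are hypothetical illustrations, not data from this thesis.

```python
# Knowledge Value Added (KVA) sketch: allocate revenue in proportion to the knowledge
# embedded in each sub-process and compute Return on Knowledge (ROK) per process.
def rok_by_process(processes, total_revenue):
    """ROK = revenue allocated by knowledge share / process cost."""
    total_knowledge = sum(p["learning_time"] * p["times_executed"] for p in processes.values())
    rok = {}
    for name, p in processes.items():
        knowledge = p["learning_time"] * p["times_executed"]
        allocated = total_revenue * knowledge / total_knowledge
        rok[name] = allocated / p["cost"]
    return rok

# Hypothetical "as-is" track-management sub-processes (illustrative values only).
as_is = {
    "detect_track": {"learning_time": 120, "times_executed": 500, "cost": 40_000},
    "correlate":    {"learning_time": 200, "times_executed": 500, "cost": 70_000},
    "identify":     {"learning_time": 300, "times_executed": 200, "cost": 60_000},
}

for name, value in rok_by_process(as_is, total_revenue=1_000_000).items():
    print(f"{name:15s} ROK = {value:.2f}")
```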
42

Development of nano-object characterization algorithms from heterogeneous data

Derville, Alexandre 20 December 2018 (has links)
This thesis is set in the technical and economic context of nanomaterials, in particular nanoparticles and block copolymers. A technological revolution is under way as these materials are introduced into matrices of varying complexity that are present in our daily lives (health, cosmetics, buildings, food, ...), giving these products unique mechanical, electrical, chemical and thermal properties. This omnipresence, combined with the economic stakes, raises two problems related to the control of fabrication processes and to the associated metrology. The first is to guarantee the traceability of these nanomaterials in order to prevent any health or environmental risk; the second is to optimize process development in order to sustain profitable industrial sectors. The two metrology techniques most commonly used for this purpose are scanning electron microscopy (SEM) and atomic force microscopy (AFM).

The first part of the work is devoted to the development of a data fusion methodology that automatically analyzes the data from each microscope and exploits their respective strengths to reduce measurement uncertainties in three dimensions. A first step addresses the correction of a major servo-control defect of the AFM that generates drifts and/or jumps in the signals. We present a data-driven method, fast to implement, that accurately corrects these deviations; it makes no assumptions about the objects or their positions and can therefore be used as a routine preprocessing step for signal enhancement before object analysis. A second step is dedicated to a method for the automatic analysis of images of spherical nanoparticles coming from an AFM or an SEM. To develop 3D traceability, it is necessary to identify and measure the same nanoparticles on both the AFM and the SEM; to obtain two estimates of the diameter of the same physical particle, we developed a technique that matches the particles across the two modalities. Starting from the estimates for the two types of microscopy, whether or not a particle appears in both kinds of images, we present a technique that aggregates the estimators over the diameter populations to obtain more reliable values of the particle-diameter properties.

The second part of the thesis is dedicated to the optimization of a block copolymer fabrication process (lamellar structures), exploiting all the characteristic quantities used to validate the process (line width, period, roughness, defect rate), extracted in particular from SEM images, and relating them to a set of process parameters. When a new process is developed, a design of experiments is carried out; its analysis allows a more or less precise process window to be estimated manually (the estimate depends on the expertise of the materials engineer), and the step is repeated until the desired characteristics are obtained. To accelerate development, we studied ways of predicting the outcome of the fabrication process over the parameter space. To this end, we investigated different regression techniques, which we present in order to propose an automatic methodology for optimizing the parameters of a process, driven by AFM and/or SEM image characteristics. This work on estimator aggregation and process-window optimization opens the way to standardized automatic analysis of SEM and AFM data, with a view to a traceability standard for nanomaterials.
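
A minimal sketch of the estimator-aggregation step described above, assuming a simple inverse-variance weighting of the per-instrument diameter estimates; the thesis's actual aggregation technique may differ, and the numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
true_diameters = rng.normal(60.0, 5.0, size=200)           # nm, synthetic population
afm = true_diameters + rng.normal(0.0, 1.5, size=200)      # AFM: smaller measurement noise
sem = true_diameters + rng.normal(0.0, 3.0, size=200)      # SEM: larger measurement noise

def aggregate(estimates_a, estimates_b):
    """Combine two estimators of the same population mean by inverse-variance weights."""
    mean_a, var_a = estimates_a.mean(), estimates_a.var(ddof=1) / len(estimates_a)
    mean_b, var_b = estimates_b.mean(), estimates_b.var(ddof=1) / len(estimates_b)
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    fused_sigma = (1.0 / (w_a + w_b)) ** 0.5
    return fused_mean, fused_sigma

mean, sigma = aggregate(afm, sem)
print(f"fused mean diameter: {mean:.2f} nm +/- {sigma:.2f} nm")
```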
43

Binocular geometry and camera motion directly from normal flows. / CUHK electronic theses & dissertations collection

January 2009 (has links)
Active vision systems are mobile platforms equipped with one or more cameras. They perceive what happens in their surroundings from the image streams the cameras grab. Such systems have a few fundamental tasks to tackle: they need to determine from time to time what their motion in space is, and, should they have multiple cameras, they need to know how the cameras are relatively positioned so that visual information collected by the respective cameras can be related. In the simplest form, the tasks are about finding the motion of a camera, and finding the relative geometry of every two cameras, from the image streams the cameras collect.

The relative motion between a camera and the imaged environment generally induces a flow field in the image stream captured by the camera. The flow field, which describes the motion correspondences of the various image positions over the image frames, is referred to in the literature as optical flow. If the optical flow field of every camera can be made available, the motion of a camera can be readily determined, and so can the relative geometry of two cameras. However, due to the well-known aperture problem, what is directly observable at any image position is generally not the full optical flow, but only the component of it that is normal to the iso-brightness contour of the intensity profile at that position. This component is widely referred to as the normal flow. It is not impossible to infer the full flow field from the normal flow field, but doing so requires specific assumptions about the imaged scene, such as that it is smooth almost everywhere.

This thesis aims at exploring how the above two fundamental tasks can be tackled by operating on the normal flow field directly. The objective is that, without the full flow inferred explicitly in the process, and in turn without specific assumptions made about the imaged scene, the developed methods can be applicable to a wider set of scenes. The thesis consists of two parts. The first part is about how the inter-camera geometry of two cameras can be determined from the two monocular normal flow fields. The second part is about how a camera's ego-motion can be determined by examining only the normal flows the camera observes.

On determining the relative geometry of two cameras, there already exist a number of calibration techniques in the literature. They are based on the presence of either some specific calibration objects in the imaged scene, or a portion of the scene that is observable by both cameras. However, in active vision, because of the "active" nature of the cameras, it can happen that a camera pair does not share much or anything in common in their visual fields. In the first part of this thesis, we propose a new solution method to the problem. The method demands image data under a rigid motion of the camera pair, but unlike the existing motion-correspondence-based calibration methods it does not estimate the optical flows or motion correspondences explicitly. Instead it estimates the inter-camera geometry from the monocular normal flows. Moreover, we propose a strategy for selecting optimal groups of normal flow vectors to improve the accuracy and efficiency of the estimation.

On determining the ego-motion of a camera, there have also been many previous works. However, again, most of them require tracking distinct features in the image stream or inferring the full optical flow field from the normal flow field. Different from the traditional works, and using neither motion correspondences nor the epipolar geometry, a new method is developed that again operates on the normal flow data directly. The method has a number of features. It can employ every normal flow data point, thus requiring less texture from the imaged scene. A novel formulation of what the normal flow direction at an image position has to offer on the camera motion is given, and this formulation allows a locus of the possible camera motion to be outlined from every data point. With enough data points or normal flows over the image domain, a simple voting scheme allows the various loci to intersect and pinpoint the camera motion.

We have tested the methods on both synthetic image data and real image sequences. Experimental results show that the developed methods are effective in determining inter-camera geometry and camera motion from normal flow fields.

Yuan, Ding. Adviser: Ronald Chung. Source: Dissertation Abstracts International, Volume: 70-09, Section: B. Thesis submitted in October 2008. Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 121-131). Abstracts in English and Chinese. School code: 1307.
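
An illustrative sketch of the voting idea described in this record, reduced to the simplest case: for a purely translating camera the image flow at a pixel points away from the focus of expansion (FOE), so the sign of the normal flow along the gradient direction constrains where the FOE can lie, and candidate FOEs can be scored by how many observations they are consistent with. This toy 2-D setup with synthetic data is only an assumption-laden simplification; the thesis's actual formulation for general camera motion is richer.

```python
import numpy as np

rng = np.random.default_rng(1)
true_foe = np.array([80.0, 60.0])                    # pixel coordinates of the true FOE

# Synthetic observations: random pixels, random gradient directions, positive inverse depth.
pixels = rng.uniform(0, 160, size=(400, 2))
grad_dirs = rng.normal(size=(400, 2))
grad_dirs /= np.linalg.norm(grad_dirs, axis=1, keepdims=True)
inv_depth = rng.uniform(0.5, 2.0, size=400)          # 1/Z > 0
full_flow = inv_depth[:, None] * (pixels - true_foe) # expansion field for forward motion
normal_flow = np.sum(full_flow * grad_dirs, axis=1)  # the only observable component

def vote_for_foe(pixels, grad_dirs, normal_flow, grid_step=4):
    """Score candidate FOEs on a grid by sign-consistency with the normal flows."""
    best, best_votes = None, -1
    for x in np.arange(0, 160, grid_step):
        for y in np.arange(0, 120, grid_step):
            cand = np.array([x, y])
            predicted_sign = np.sign(np.sum((pixels - cand) * grad_dirs, axis=1))
            votes = int(np.sum(predicted_sign == np.sign(normal_flow)))
            if votes > best_votes:
                best, best_votes = cand, votes
    return best, best_votes

foe, votes = vote_for_foe(pixels, grad_dirs, normal_flow)
print("estimated FOE:", foe, "with", votes, "consistent normal flows")
```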
44

Distress situation identification by multimodal data fusion for home healthcare telemonitoring

Medjahed, Hamid 19 January 2010 (has links)
Today, the proportion of elderly people is becoming large relative to the population as a whole, and hospital admission capacity is limited. Consequently, several medical telemonitoring systems have been developed, but few commercial solutions exist. These systems focus either on implementing a generic architecture for integrating medical information systems, on improving patients' daily lives through various automatic alarm devices, or on offering care services to patients suffering from particular diseases such as asthma, diabetes, cardiac or pulmonary problems, or Alzheimer's disease. In this context, an automatic system for home medical telemonitoring is a way of addressing these problems and thus allowing elderly people to live safely and independently at home. In this thesis, set in the field of medical telemonitoring, a new multimodal home telemonitoring system named EMUTEM (Environnement Multimodale pour la Télévigilance Médicale) is presented. It combines and synchronizes several modalities, or sensors, through a multimodal data fusion technique based on fuzzy logic, and can provide continuous monitoring of the health of elderly people. The originality of this system, with its new fusion approach, is its flexibility in combining several medical telemonitoring modalities. It offers a great benefit to elderly people by permanently monitoring their state of health and detecting possible distress situations. / The population is ageing in all societies throughout the world. In Europe, for example, life expectancy is about 71 years for men and about 79 years for women; in North America it is currently about 75 for men and 81 for women. Moreover, the elderly prefer to preserve their independence, autonomy and way of life by living at home as long as possible. The current healthcare infrastructures in these countries are widely considered inadequate to meet the needs of an increasingly older population. Home healthcare monitoring is a solution to this problem and ensures that elderly people can live safely and independently in their own homes for as long as possible. Automatic in-home healthcare monitoring is a technological approach that helps people age in place through continuous telemonitoring. In this thesis, we explore automatic in-home healthcare monitoring by studying the practice of professionals who currently perform in-home monitoring, and by combining and synchronizing various telemonitoring modalities within a data synchronization and multimodal data fusion platform, FL-EMUTEM (Fuzzy Logic Multimodal Environment for Medical Remote Monitoring). This platform incorporates algorithms that process each modality and a fuzzy-logic-based multimodal data fusion technique, and can ensure pervasive in-home health monitoring for elderly people. The originality of this thesis, namely the combination of various modalities in the home concerning both its inhabitant and their surroundings, constitutes a valuable benefit for elderly people suffering from loneliness. This work complements the stationary smart-home environment by bringing to bear its capability for integrative, continuous observation and detection of critical situations.
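
A minimal sketch of fuzzy-logic fusion of two telemonitoring modalities into a distress score, in the spirit of the approach described above. The membership functions, rule base, and thresholds are illustrative assumptions, not the system's actual parameters.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function on [a, d] with plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def distress_score(heart_rate_bpm, inactivity_min):
    # Degree to which each modality looks abnormal (hypothetical shapes).
    hr_abnormal = max(trapezoid(heart_rate_bpm, 20, 30, 45, 55),      # too low
                      trapezoid(heart_rate_bpm, 110, 130, 200, 220))  # too high
    inactive_long = trapezoid(inactivity_min, 30, 60, 600, 720)
    # Toy rule base: distress when abnormal heart rate AND prolonged inactivity.
    rule_min = min(hr_abnormal, inactive_long)                  # min as AND
    rule_strict = max(0.0, hr_abnormal + inactive_long - 1.0)   # Lukasiewicz AND
    return max(rule_min, rule_strict)                           # max as rule aggregation

print(distress_score(heart_rate_bpm=140, inactivity_min=90))   # elevated score
print(distress_score(heart_rate_bpm=72, inactivity_min=10))    # near zero
```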
45

Multiple Target Tracking in Realistic Environments Using Recursive-RANSAC in a Data Fusion Framework

Millard, Jeffrey Dyke 01 December 2017 (has links)
Reliable track continuity is an important characteristic of multiple target tracking (MTT) algorithms. In the specific case of visually tracking multiple ground targets from an aerial platform, challenges arise in realistic operating environments due to video compression artifacts, unmodeled camera vibration, and general imperfections in the target detection algorithm. Some popular visual detection techniques include Kanade-Lucas-Tomasi (KLT)-based motion detection, difference imaging, and object feature matching. Each of these algorithmic detectors has fundamental limitations in regard to providing consistent measurements. In this thesis we present a scalable detection framework that simultaneously leverages multiple measurement sources. We present the recursive random sample consensus (R-RANSAC) algorithm in a data fusion architecture that accommodates multiple measurement sources. Robust track continuity and real-time performance are demonstrated with post-processed flight data and a hardware demonstration in which the aircraft performs automated target following. Applications involving autonomous tracking of ground targets occasionally encounter situations where semantic information about targets would improve performance. This thesis also presents an autonomous target labeling framework that leverages cloud-based image classification services to classify targets that are tracked by the R-RANSAC MTT algorithm. The communication is managed by a Python robot operating system (ROS) node that accounts for latency and filters the results over time. This thesis articulates the feasibility of this approach and suggests hardware improvements that would yield reliable results. Finally, this thesis presents a framework for image-based target recognition to address the problem of tracking targets that become occluded for extended periods of time. This is done by collecting descriptors of targets tracked by R-RANSAC. Before new tracks are assigned an ID, an attempt to match visual information with historical tracks is triggered. The concept is demonstrated in a simulation environment with a single target, using template-based target descriptors. This contribution provides a framework for improving track reliability when faced with target occlusions.
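
A minimal sketch of the re-identification step described above: before a new track is given a fresh ID, its appearance descriptor is compared against descriptors stored from historical tracks, and a sufficiently similar match inherits the old ID. The plain feature-vector descriptor and the similarity threshold are illustrative assumptions, not the thesis's template-matching details.

```python
import numpy as np

class TrackMemory:
    def __init__(self, similarity_threshold=0.9):
        self.descriptors = {}          # track_id -> stored appearance descriptor
        self.threshold = similarity_threshold
        self._next_id = 0

    def _cosine(self, a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def assign_id(self, descriptor):
        """Return an existing ID if the descriptor matches a stored track, else a new one."""
        best_id, best_sim = None, -1.0
        for track_id, stored in self.descriptors.items():
            sim = self._cosine(descriptor, stored)
            if sim > best_sim:
                best_id, best_sim = track_id, sim
        if best_id is not None and best_sim >= self.threshold:
            self.descriptors[best_id] = descriptor    # refresh the stored appearance
            return best_id
        new_id = self._next_id
        self._next_id += 1
        self.descriptors[new_id] = descriptor
        return new_id

memory = TrackMemory()
first_id = memory.assign_id(np.array([0.90, 0.10, 0.30]))
after_occlusion = memory.assign_id(np.array([0.88, 0.12, 0.31]))  # similar -> same ID reused
print(first_id, after_occlusion)
```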
46

Optimizing Shipping Container Damage Prediction and Maritime Vessel Service Time in Commercial Maritime Ports Through High Level Information Fusion

Panchapakesan, Ashwin 09 September 2019 (has links)
The overwhelming majority of global trade is executed over maritime infrastructure, and port-side optimization problems are significant given that commercial maritime ports are hubs at which sea trade routes and land/rail trade routes converge. Therefore, optimizing maritime operations brings the promise of improvements with global impact. Major performance bottlenecks in the maritime trade process include the handling of insurance claims on shipping containers and vessel service time at port. The former has high input dimensionality and includes data pertaining to environmental and human attributes, as well as operational attributes such as the weight balance of a shipping container; it therefore lends itself to multiple classification methodologies, many of which are explored in this work. In order to compare their performance, a first-of-its-kind dataset was developed with carefully curated attributes. The performance of these methodologies was further improved by exploring metalearning techniques that boost the collective performance of a subset of these classifiers. The latter problem is formulated as a schedule optimization and solved with a fuzzy system that controls port-side resource deployment, whose parameters are optimized by a multi-objective evolutionary algorithm that outperforms current industry practice (as mined from real-world data). This methodology has been applied to multiple ports across the globe to demonstrate its generalizability, and improves upon current industry practice even with synthetically increased vessel traffic.
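
A hedged sketch of the metalearning idea mentioned above: improving a subset of base classifiers by stacking them under a meta-learner. The synthetic dataset stands in for the thesis's curated container-damage data, and the particular base learners are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a container-damage dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=15)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),   # the meta-learner
)
stack.fit(X_train, y_train)
print("stacked accuracy:", stack.score(X_test, y_test))
```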
47

A Framework for the Creation of a Unified Electronic Medical Record Using Biometrics, Data Fusion and Belief Theory

Leonard, Dwayne Christopher 13 December 2007 (has links)
The technology exists to migrate healthcare data from its archaic paper-based system to an electronic one and, once in digital form, to transport it anywhere in the world in a matter of seconds. The advent of universally accessible healthcare data benefits all participants, but one of the outstanding problems that must be addressed is how to uniquely identify and link a patient to his or her specific medical data. To date, the few solutions proposed for this problem have been limited in their effectiveness. We propose the use of biometric technology within our FIRD framework to solve the problem of uniquely associating a patient with his or her medical data. This would allow a patient secure, real-time electronic access to all of his or her recorded healthcare information whenever necessary, with minimal effort and greater effectiveness and ease.
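
A minimal sketch of how belief-theoretic fusion of biometric evidence can work, combining two matchers' outputs with Dempster's rule of combination over the frame {match, non-match}. The mass assignments below are illustrative, not values from the FIRD framework.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets over the frame."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

MATCH, NONMATCH = frozenset({"match"}), frozenset({"non-match"})
THETA = MATCH | NONMATCH   # total ignorance

fingerprint = {MATCH: 0.85, NONMATCH: 0.05, THETA: 0.10}   # hypothetical matcher output
iris        = {MATCH: 0.70, NONMATCH: 0.10, THETA: 0.20}

fused = dempster_combine(fingerprint, iris)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```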
48

Performance Evaluation of Time Synchronization and Clock Drift Compensation in Wireless Personal Area Networks

Wåhslén, Jonas, Orhan, Ibrahim, Sturm, Dennis, Lindh, Thomas January 2012 (has links)
Efficient algorithms for time synchronization, including compensation for clock drift, are essential in order to obtain reliable fusion of data samples from multiple wireless sensor nodes. This paper evaluates the performance of algorithms based on three different approaches: one that synchronizes the local clocks on the sensor nodes, a second that uses a single clock on the receiving node (e.g. a mobile phone), and a third that uses broadcast messages. The performance of the synchronization algorithms is evaluated in wireless personal area networks, especially Bluetooth piconets and ZigBee/IEEE 802.15.4 networks. A new approach for compensation of clock drift and a real-time implementation of single-node synchronization from the mobile phone are presented and tested. Finally, applications of data fusion and time synchronization are shown in two different use cases: a kayaking sports case, and monitoring of the heart and respiration of prematurely born infants.
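
A hedged sketch of clock-drift compensation, assuming a simple linear clock model t_receiver ~ a*t_sensor + b fitted by least squares from message arrival times on the receiving node; the paper evaluates more refined schemes, and the numbers here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
t_sensor = np.arange(0.0, 60.0, 0.02)                     # sensor-side sample times (s)
true_drift, true_offset = 1.0001, 0.35                    # 100 ppm drift, 350 ms offset
# Arrival times on the receiver, with a small random transmission delay per message.
t_arrival = true_drift * t_sensor + true_offset + rng.exponential(0.004, t_sensor.size)

# Fit the linear clock model on (sensor time, arrival time) pairs.
a, b = np.polyfit(t_sensor, t_arrival, 1)

def to_receiver_time(ts):
    """Map a sensor timestamp onto the receiver's timeline."""
    return a * ts + b

print(f"estimated drift {a:.6f}, offset {b * 1000:.1f} ms")
print("sample at 30.0 s on the sensor ->", round(to_receiver_time(30.0), 4), "s on the receiver")
```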
49

Solar Energy Potential Analysis at Building Scale Using LiDAR and Satellite Data

Aguayo, Paula 23 May 2013 (has links)
The two main challenges of the twenty-first century are the scarcity of energy sources and global warming, triggered by the emission of greenhouse gases. In this context, solar energy has become increasingly relevant, because it makes optimal use of resources, minimizes environmental impacts, and is sustainable over time. However, before installing solar panels, it is advisable to pre-assess the amount of energy that a building can harvest. This study proposes a methodology to semi-automatically generate this information at the building scale over a large area. This thesis integrates airborne Light Detection and Ranging (LiDAR) and WorldView-2 satellite data for modelling the solar energy potential of building rooftops in San Francisco, California. The methodology involves building detection, solar potential analysis, and energy estimation at the building scale. First, the outlines of building rooftops are extracted using an object-based approach. Next, the solar modelling is carried out using the solar radiation analysis tool in ArcGIS Spatial Analyst. Then, the energy that could potentially be harvested by each building rooftop is estimated, and the estimates are expressed in economic and environmental terms.
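
A back-of-the-envelope sketch of the final estimation step: converting modelled rooftop irradiation into harvestable energy and avoided emissions. The panel efficiency, performance ratio, irradiation value, and emission factor are assumed illustrative values, not results from the thesis.

```python
def rooftop_energy_kwh(usable_area_m2, annual_irradiation_kwh_m2,
                       panel_efficiency=0.18, performance_ratio=0.8):
    """Annual electricity yield for a rooftop, using a standard PV sizing formula."""
    return usable_area_m2 * annual_irradiation_kwh_m2 * panel_efficiency * performance_ratio

area = 120.0                  # m2 of suitable rooftop from the LiDAR-derived outline
irradiation = 1700.0          # kWh/m2/year from the solar radiation model
energy = rooftop_energy_kwh(area, irradiation)
avoided_co2_kg = energy * 0.4    # assumed grid emission factor, kg CO2 per kWh

print(f"~{energy:,.0f} kWh/year, ~{avoided_co2_kg:,.0f} kg CO2 avoided")
```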
50

Distributed Random Set Theoretic Soft/Hard Data Fusion

Khaleghi, Bahador January 2012 (has links)
Research on multisensor data fusion aims at providing the enabling technology to combine information from several sources in order to form a unified picture. The literature on fusion of conventional data provided by non-human (hard) sensors is vast and well-established. In comparison to conventional fusion systems, where input data are generated by calibrated electronic sensor systems with well-defined characteristics, research on soft data fusion considers combining human-based data expressed preferably in unconstrained natural language form. Fusion of soft and hard data is even more challenging, yet necessary in some applications, and has received little attention in the past. Being a rather new area of research, soft/hard data fusion is still in a fledgling stage, with even its challenging problems yet to be adequately defined and explored. This dissertation develops a framework to enable fusion of both soft and hard data with Random Set (RS) theory as the underlying mathematical foundation. Random set theory is an emerging theory within the data fusion community that, due to its powerful representational and computational capabilities, is gaining more and more attention among data fusion researchers. Motivated by the unique characteristics of random set theory and the main challenge of soft/hard data fusion systems, i.e. the need for a unifying framework capable of processing both unconventional soft data and conventional hard data, this dissertation argues in favor of a random set theoretic approach as the first step towards realizing a soft/hard data fusion framework. Several challenging problems related to soft/hard fusion systems are addressed in the proposed framework. First, an extension of the well-known Kalman filter within random set theory, called the Kalman evidential filter (KEF), is adopted as a common data processing framework for both soft and hard data. Second, a novel ontology (syntax + semantics) is developed to allow for modeling soft (human-generated) data, assuming target tracking as the application. Third, as soft/hard data fusion is mostly aimed at large information processing networks, a new approach is proposed to enable distributed estimation of soft as well as hard data, addressing the scalability requirement of such fusion systems. Fourth, a method for modeling trust in the human agents is developed, which enables the fusion system to protect itself from erroneous/misleading soft data by discounting such data on-the-fly. Fifth, leveraging recent developments in the RS theoretic data fusion literature, a novel soft data association algorithm is developed and deployed to extend the proposed target tracking framework to the multi-target tracking case. Finally, the multi-target tracking framework is complemented by introducing a distributed classification approach applicable to target classes described with soft human-generated data. In addition, this dissertation presents a novel data-centric taxonomy of data fusion methodologies. In particular, several categories of fusion algorithms are identified and discussed based on the data-related challenging aspect(s) they address. It is intended to provide the reader with a generic and comprehensive view of the contemporary data fusion literature, and could also serve as a reference for data fusion practitioners by providing conducive design guidelines, in terms of algorithm choice, regarding the specific data-related challenges expected in a given application.
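
A minimal sketch of the standard Kalman filter recursion that the Kalman evidential filter (KEF) extends; the evidential/random-set generalization itself is not reproduced here. The model is an assumed 1-D constant-velocity target with synthetic position measurements.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (position, velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2)                           # initial covariance

rng = np.random.default_rng(3)
for k in range(10):
    # --- predict ---
    x = F @ x
    P = F @ P @ F.T + Q
    # --- update with a noisy observation of the true position (k + 1) ---
    z = np.array([[(k + 1) * 1.0 + rng.normal(0, 0.7)]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("final position/velocity estimate:", x.ravel())
```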
