31

Generic support for decision-making in management and command and control

Wallenius, Klas January 2004 (has links)
Flexibility is the keyword when preparing for the uncertain future tasks of the civilian and military defence. Support tools relying on general principles will greatly facilitate flexible co-ordination and co-operation between different civilian and military organizations, and also between different command levels. Further motivations for general solutions include reduced costs for technical development and training, as well as faster and more informed decision-making. Most technical systems that support military activities are, however, designed with specific work tasks in mind, and are consequently rather inflexible. There are large differences between, for instance, fire fighting, disaster relief, calculating missile trajectories, and navigating large battleships. Still, there ought to be much in common in the work of managing these various tasks. We use the term Command and Control (C2) to capture these common features in the management of civilian and military, rescue and defence operations.

Consequently, this thesis describes a top-down approach to support systems for decision-making in the context of C2, as a complement to the prevailing bottom-up approaches. DISCCO (Decision Support for Command and Control) is a set of network-based services including Command Support, which helps commanders in the human, cooperative and continuous process of evolving, evaluating, and executing solutions to their tasks. The command tools provide the means to formulate and visualize tasks, plans, and assessments, but also the means to visualize decisions on the dynamic design of the organization. Also included in DISCCO is Decision Support, which, based on AI and simulation techniques, improves the human process by integrating automatic and semi-automatic generation and evaluation of plans. The tools provided by DISCCO interact with a Common Situation Model capturing the recursive structure of the situation, including the status, the dynamic organization, and the intentions of own, allied, neutral, and hostile resources. Hence, DISCCO provides a more comprehensive situation description than has previously been possible to achieve.

DISCCO shows generic features since it is designed to support a decision-making process abstracted from the actual kinds and details of the tasks being solved. Thus it will be useful through all phases of an operation, through all command levels, and through all the different organizations and activities that are involved.

Keywords: Command and Control, Management, Decision Support, Data Fusion, Information Fusion, Situation Awareness, Network-Based Defence, Ontology.
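[Editor's note] The recursive Common Situation Model described above lends itself naturally to a tree-structured data type. The sketch below is a minimal illustration of that idea only; it is not DISCCO's actual implementation, and every name in it (Resource, affiliation values, field names) is a hypothetical assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical affiliation labels; the real DISCCO schema is not public.
AFFILIATIONS = ("own", "allied", "neutral", "hostile")

@dataclass
class Resource:
    """One node in a recursive situation model: a unit, its status,
    its assessed intention, and the sub-units it commands."""
    name: str
    affiliation: str                       # one of AFFILIATIONS
    status: str = "unknown"                # e.g. "operational", "degraded"
    intention: Optional[str] = None        # assessed intent, if any
    subordinates: List["Resource"] = field(default_factory=list)

    def all_units(self):
        """Walk the recursive structure, yielding every unit."""
        yield self
        for sub in self.subordinates:
            yield from sub.all_units()

# Usage: a two-level organization, queried across all command levels.
brigade = Resource("Brigade HQ", "own", "operational", subordinates=[
    Resource("Battalion A", "own", "operational", intention="advance north"),
    Resource("Battalion B", "own", "degraded"),
])
degraded = [u.name for u in brigade.all_units() if u.status == "degraded"]
print(degraded)  # ['Battalion B']
```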
32

The application of relative navigation to civil air traffic management

Sangpetchsong, K. January 2000 (has links)
No description available.
33

Single and multiple stereo view navigation for planetary rovers

Bartolomé, Diego Rodríguez January 2013 (has links)
This thesis deals with the challenge of autonomous navigation for the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques, as done in the literature, an unviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem. The homogeneity of Martian terrain makes the robustness of the low-level image processing technique a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Robust feature detection under illumination changes, together with unique matching and association of features, is a sought-after capability. A solution for feature robustness against illumination variation is proposed that combines Harris corner detection with a moment image representation: the former provides efficient feature detection, while the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. The addition of local feature descriptors then guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars rover is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored, and alternative photogrammetry reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability; because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed motion estimation solutions. The developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results prove the innovative methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment.
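[Editor's note] As a rough illustration of the feature pipeline described above, the sketch below combines OpenCV's Harris detector with a simple bucketing grid that keeps the strongest response per cell, so features end up homogeneously distributed. It is a minimal sketch of the general idea, not the thesis's implementation; the moment-image and descriptor stages are omitted, and the grid size and quality threshold are arbitrary assumptions.

```python
import cv2
import numpy as np

def bucketed_harris(gray, grid=(8, 8), block_size=2, ksize=3, k=0.04):
    """Detect Harris corners, then keep the strongest corner per grid
    cell so that features are spread homogeneously over the image."""
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    h, w = gray.shape
    ch, cw = h // grid[0], w // grid[1]
    keypoints = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = response[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
            y, x = np.unravel_index(np.argmax(cell), cell.shape)
            if cell[y, x] > 0.01 * response.max():   # arbitrary quality gate
                keypoints.append((j*cw + x, i*ch + y))
    return keypoints

# Usage:
# gray = cv2.imread("terrain.png", cv2.IMREAD_GRAYSCALE)
# pts = bucketed_harris(gray)   # at most one corner per grid cell
```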
34

Sensor fusion for boost phase interception of ballistic missiles

Humali, I. Gokhan 09 1900 (has links)
Approved for public release; distribution is unlimited / In the boost phase interception of ballistic missiles, determining the exact position of a ballistic missile is of significant importance. Several sensors are used to detect and track the missile, and these sensors differ from each other in many aspects. The outputs of radars give range, elevation and azimuth information for the target, while space-based infrared sensors give elevation and azimuth information only. These outputs have to be combined (fused) to achieve better position information for the missile. The architecture used in this thesis is a decision-level fusion architecture. This thesis examines four algorithms to fuse the results of radar sensors and space-based infrared sensors: an averaging technique, a weighted averaging technique, a Kalman filtering approach and a Bayesian technique are compared. The ballistic missile boost phase segment and the sensors are modeled in MATLAB. The missile vector and dynamics are based upon Newton's laws, and the simulation uses an earth-centered coordinate system. The Bayesian algorithm has the best performance, resulting in an RMS missile position error of less than 20 m. / 1st Lieutenant, Turkish Air Force
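[Editor's note] The two simplest of the four fusion algorithms compared above, plain and weighted averaging of the sensors' position estimates, can be sketched in a few lines. The thesis's models are in MATLAB with full radar/infrared geometry; the Python sketch below is a simplified stand-in that fuses two noisy 3D position estimates, weighting each by the inverse of its error variance (the usual minimum-variance choice). All numeric values are invented for illustration.

```python
import numpy as np

def fuse_positions(estimates, variances):
    """Decision-level fusion of independent position estimates by
    inverse-variance weighting; a plain average is the special case
    of equal weights."""
    w = 1.0 / np.asarray(variances)          # weight ~ 1 / sigma^2
    w = w / w.sum()
    return np.average(np.asarray(estimates), axis=0, weights=w)

# Illustrative numbers only: radar (range + angles) vs. space-based IR.
radar_est = np.array([1000.0, 2000.0, 150000.0])   # meters
ir_est    = np.array([1040.0, 1980.0, 150090.0])
fused = fuse_positions([radar_est, ir_est], variances=[400.0, 2500.0])
print(fused)   # closer to the radar estimate, which has lower variance
```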
35

Assessing the operational value of situational awareness for AEGIS and Ship Self Defense System (SSDS) platforms through the application of the Knowledge Value Added (KVA) methodology

Uchytil, Joseph. 06 1900 (has links)
As the United States Navy strives to attain a myriad of situational awareness systems that provide the functionality and interoperability required for future missions, the fundamental idea of open architecture is beginning to spread throughout the Department. In order to make rational, informed decisions concerning the processes and systems that will be integrated to provide this situational awareness, an analytical method must be used to identify process deficiencies and produce quantifiable measurement indicators. This thesis will apply the Knowledge Value Added (KVA) methodology to the current processes involved in track management aboard the AEGIS and Ship Self Defense System (SSDS) platforms. Additional analysis will be conducted based on notional changes that could occur were the systems designed using an open architecture approach. A valuation based on knowledge assets will be presented.
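[Editor's note] Knowledge Value Added, in its usual formulation (Housel and Bell), proxies the knowledge embedded in a subprocess by the time needed to learn it, allocates revenue in proportion to that knowledge, and reports a return on knowledge (ROK) as allocated revenue over cost. The sketch below is a generic illustration of that calculation with entirely invented subprocesses and numbers, not figures from the thesis.

```python
def return_on_knowledge(processes, total_revenue):
    """KVA-style analysis: allocate revenue to each subprocess in
    proportion to its learning time, then compute ROK = revenue / cost."""
    total_learning = sum(p["learning_hours"] for p in processes)
    for p in processes:
        share = p["learning_hours"] / total_learning
        p["rok"] = (share * total_revenue) / p["cost"]
    return processes

# Invented track-management subprocesses, for illustration only.
track_mgmt = [
    {"name": "detect",    "learning_hours": 120, "cost": 40_000},
    {"name": "classify",  "learning_hours": 300, "cost": 90_000},
    {"name": "correlate", "learning_hours": 180, "cost": 100_000},
]
for p in return_on_knowledge(track_mgmt, total_revenue=1_000_000):
    print(p["name"], round(p["rok"], 2))  # low ROK flags a process deficiency
```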
36

Development of nano-object characterization algorithms from heterogeneous data

Derville, Alexandre 20 December 2018 (has links)
This thesis is set in the technical and economic context of nanomaterials, more specifically nanoparticles and block copolymers. Today, a technological revolution is under way with the introduction of these materials into more or less complex matrices present in our daily lives (health, cosmetics, buildings, food...). These materials give such products unique properties (mechanical, electrical, chemical, thermal...). This omnipresence, combined with the economic stakes, raises two problems related to process control and the associated metrology. The first is to ensure traceability of these nanomaterials in order to prevent any health and environmental risks; the second is to optimize process development in order to sustain profitable economic sectors. For this, the two most common metrology techniques used are scanning electron microscopy (SEM) and atomic force microscopy (AFM).

The first part of the work is devoted to the development of a data fusion methodology that automatically analyzes data from each microscope and uses their respective strengths to reduce measurement uncertainties in three dimensions. A first section was dedicated to the correction of a major servo defect of the AFM which generates drifts and/or jumps in the signals. We present a data-driven methodology, fast to implement, which accurately corrects these deviations. The proposed methodology makes no assumptions about the objects or their locations and can therefore be used as an efficient automatic preprocessing routine for signal enhancement before object analysis.

The second section is dedicated to the development of a method for the automatic analysis of spherical nanoparticle images coming from an AFM or an SEM. In order to develop 3D traceability, it is necessary to identify and measure the identical nanoparticles that have been measured on both the AFM and the SEM. To obtain two estimates of the diameter of the same physical particle, we developed a technique that matches the particles. Starting from estimates for both types of microscopy, with particles present in both kinds of images or not, we present a technique that aggregates estimators over the diameter populations in order to obtain a more reliable value of the particle diameter properties.

The second part of this thesis is dedicated to the optimization of a block copolymer fabrication process (lamellar structures) in order to exploit all the characteristic quantities used for process validation (line width, period, roughness, defect rate), in particular from SEM images, and to relate them to a set of process parameters. Indeed, during the development of a new process, a design of experiments is carried out. Its analysis makes it possible to manually estimate a more or less precise process window (an estimate tied to the expertise of the materials engineer). This step is repeated until the desired characteristics are obtained. In order to accelerate development, we studied a way of predicting the result of the fabrication process over the parameter space. For this, we studied different regression techniques, which we present in order to propose an automatic methodology for optimizing the parameters of a process fed by AFM and/or SEM image characteristics.

This work on estimator aggregation and process-window optimization opens the way to a standardized automatic analysis of SEM and AFM data, with a view to developing a traceability standard for nanomaterials.
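[Editor's note] One standard way to aggregate the two diameter estimates mentioned above, and a plausible reading of "aggregation of estimators over the diameter populations", is an inverse-variance weighted mean with its combined uncertainty. The sketch below illustrates that idea on invented AFM/SEM numbers; the thesis's actual aggregation scheme may differ.

```python
import numpy as np

def aggregate_diameters(means, variances):
    """Inverse-variance weighted aggregation of per-instrument diameter
    estimates; returns the combined mean and its variance."""
    w = 1.0 / np.asarray(variances)
    mean = np.sum(w * np.asarray(means)) / np.sum(w)
    return mean, 1.0 / np.sum(w)

# Invented example: AFM is precise in height, SEM in lateral size.
afm_mean, afm_var = 52.1, 0.8   # nm, nm^2
sem_mean, sem_var = 50.7, 2.5
d, var = aggregate_diameters([afm_mean, sem_mean], [afm_var, sem_var])
print(f"fused diameter: {d:.2f} nm +/- {var**0.5:.2f} nm")
```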
37

Binocular geometry and camera motion directly from normal flows

Yuan, Ding January 2009 (has links)
Active vision systems are mobile platforms equipped with one or more cameras. They perceive what happens in their surroundings from the image streams the cameras grab. Such systems have a few fundamental tasks to tackle: they need to determine from time to time what their motion in space is, and, should they have multiple cameras, they need to know how the cameras are relatively positioned so that visual information collected by the respective cameras can be related. In the simplest form, the tasks are about finding the motion of a camera, and finding the relative geometry of every two cameras, from the image streams the cameras collect.

The relative motion between a camera and the imaged environment generally induces a flow field in the image stream captured by the camera. The flow field, which is about motion correspondences of the various image positions over the image frames, is referred to as optical flow in the literature. If the optical flow field of every camera can be made available, the motion of a camera can be readily determined, and so can the relative geometry of two cameras. However, due to the well-known aperture problem, what is directly observable at any image position is generally not the full optical flow, but only the component of it that is normal to the iso-brightness contour of the intensity profile at that position. This component is widely referred to as the normal flow. It is not impossible to infer the full flow field from the normal flow field, but doing so requires specific assumptions about the imaged scene, such as that it is smooth almost everywhere.

This thesis aims at exploring how the above two fundamental tasks can be tackled by operating on the normal flow field directly. The objective is that, with no full flow inferred explicitly in the process, and in turn no specific assumption made about the imaged scene, the developed methods can be applicable to a wider set of scenes. The thesis consists of two parts. The first part is about how the inter-camera geometry of two cameras can be determined from the two monocular normal flow fields. The second part is about how a camera's ego-motion can be determined by examining only the normal flows the camera observes.

On determining the relative geometry of two cameras, there already exist a number of calibration techniques in the literature. They are based on the presence of either some specific calibration objects in the imaged scene, or a portion of the scene that is observable by both cameras. However, in active vision, because of the "active" nature of the cameras, it can happen that a camera pair do not share much or anything in common in their visual fields. In the first part of this thesis, we propose a new solution method to the problem. The method demands image data under a rigid motion of the camera pair, but unlike the existing motion-correspondence-based calibration methods it does not estimate the optical flows or motion correspondences explicitly. Instead it estimates the inter-camera geometry from the monocular normal flows. Moreover, we propose a strategy for selecting optimal groups of normal flow vectors to improve the accuracy and efficiency of the estimation.

On determining the ego-motion of a camera, there have been many previous works as well. However, again, most of them require tracking distinct features in the image stream or inferring the full optical flow field from the normal flow field. Different from the traditional works, utilizing neither motion correspondence nor the epipolar geometry, a new method is developed that operates again on the normal flow data directly. The method has a number of features. It can employ every normal flow data point, thus requiring less texture from the image scene. A novel formulation of what the normal flow direction at an image position has to offer on the camera motion is given, and this formulation allows a locus of the possible camera motion to be outlined from every data point. With enough data points or normal flows over the image domain, a simple voting scheme lets the various loci intersect and pinpoint the camera motion.

We have tested the methods on both synthetic image data and real image sequences. Experimental results show that the developed methods are effective in determining inter-camera geometry and camera motion from normal flow fields. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 121-131). / Abstracts in English and Chinese.
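[Editor's note] The locus-and-voting idea in the ego-motion part can be made concrete in the special case of pure camera translation, where the full optical flow at image point p points away from the focus of expansion e, so the sign of the normal flow along gradient direction n must equal the sign of (p − e)·n. Each normal flow thus confines e to a half-plane, and voting over a grid of candidate e intersects the loci. The sketch below implements only this simplified translational case on synthetic data; the thesis's formulation covers general motion.

```python
import numpy as np

def vote_foe(points, normals, signs, grid_x, grid_y):
    """Accumulate votes for focus-of-expansion candidates consistent
    with the sign of each observed normal flow (half-plane constraint)."""
    votes = np.zeros((len(grid_y), len(grid_x)))
    for p, n, s in zip(points, normals, signs):
        for i, ey in enumerate(grid_y):
            for j, ex in enumerate(grid_x):
                # constraint: sign((p - e) . n) must match the observed sign
                if np.sign((p[0]-ex)*n[0] + (p[1]-ey)*n[1]) == s:
                    votes[i, j] += 1
    return votes

# Synthetic scene: true FOE at (40, -20), random gradient directions.
rng = np.random.default_rng(0)
pts = rng.uniform(-100, 100, size=(200, 2))
ang = rng.uniform(0, 2*np.pi, 200)
nrm = np.stack([np.cos(ang), np.sin(ang)], axis=1)
true_e = np.array([40.0, -20.0])
sgn = np.sign(np.sum((pts - true_e) * nrm, axis=1))

gx = gy = np.arange(-100, 101, 10)
v = vote_foe(pts, nrm, sgn, gx, gy)
i, j = np.unravel_index(np.argmax(v), v.shape)
print("estimated FOE:", (gx[j], gy[i]))   # near (40, -20)
```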
38

Distress situation identification by multimodal data fusion for home healthcare telemonitoring

Medjahed, Hamid 19 January 2010 (has links)
The proportion of older people is increasing in all societies throughout the world. In Europe, for example, life expectancy is about 71 years for men and about 79 years for women; in North America it is currently about 75 for men and 81 for women. Moreover, the elderly prefer to preserve their independence, autonomy and way of life, living at home as long as possible. The current healthcare infrastructures in these countries are widely considered inadequate to meet the needs of an increasingly older population. Home healthcare monitoring is a solution to this problem and ensures that elderly people can live safely and independently in their own homes for as long as possible. Automatic in-home healthcare monitoring is a technological approach that helps people age in place through continuous telemonitoring. In this thesis, we explore automatic in-home healthcare monitoring by conducting a study of professionals who currently perform in-home healthcare monitoring, and by combining and synchronizing various telemonitoring modalities under a data synchronization and multimodal data fusion platform, FL-EMUTEM (Fuzzy Logic Multimodal Environment for Medical Remote Monitoring). This platform incorporates algorithms that process each modality and provides a multimodal data fusion technique, based on fuzzy logic, which ensures pervasive in-home health monitoring of elderly people. The originality of this thesis, namely the combination of various modalities in the home concerning its inhabitant and their surroundings, constitutes an interesting benefit for the elderly person suffering from loneliness. This work complements the stationary smart home environment by bringing to bear its capability for integrative continuous observation and detection of critical situations.
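[Editor's note] At the core of a fuzzy-logic fusion scheme like the one described above is the evaluation of graded rules over heterogeneous sensor readings. The sketch below shows the general pattern: triangular membership functions, a min/max rule base, and a fused distress score. The modalities, thresholds, and rules are entirely invented assumptions, not the FL-EMUTEM rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def distress_score(sound_db, inactivity_min, heart_rate):
    """Fuse three modalities with a tiny fuzzy rule base
    (min = AND, max = OR). All memberships are illustrative."""
    loud     = tri(sound_db, 60, 90, 120)          # scream-like sound
    inactive = tri(inactivity_min, 10, 40, 70)     # long stillness
    abnormal = max(tri(heart_rate, 20, 35, 50),    # too low
                   tri(heart_rate, 110, 140, 170)) # too high
    # Rule 1: loud sound AND abnormal heart rate -> distress
    # Rule 2: long inactivity AND abnormal heart rate -> distress
    return max(min(loud, abnormal), min(inactive, abnormal))

print(distress_score(sound_db=95, inactivity_min=5, heart_rate=150))
# high score -> raise an alarm for the telemonitoring operator
```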
39

Optimizing Shipping Container Damage Prediction and Maritime Vessel Service Time in Commercial Maritime Ports Through High Level Information Fusion

Panchapakesan, Ashwin 09 September 2019 (has links)
The overwhelming majority of global trade is executed over maritime infrastructure, and port-side optimization problems are significant given that commercial maritime ports are hubs at which sea trade routes and land/rail trade routes converge. Optimizing maritime operations therefore promises improvements with global impact. Major performance bottlenecks in the maritime trade process include the handling of insurance claims on shipping containers and vessel service time at port. The former has high input dimensionality and includes data pertaining to environmental and human attributes, as well as operational attributes such as the weight balance of a shipping container; it therefore lends itself to multiple classification methodologies, many of which are explored in this work. In order to compare their performance, a first-of-its-kind dataset was developed with carefully curated attributes. The performance of these methodologies was improved by exploring metalearning techniques that boost the collective performance of a subset of these classifiers. The latter problem is formulated as a schedule optimization and solved with a fuzzy system that controls port-side resource deployment, whose parameters are optimized by a multi-objective evolutionary algorithm that outperforms current industry practice (as mined from real-world data). This methodology has been applied to multiple ports across the globe to demonstrate its generalizability, and it improves upon current industry practice even with synthetically increased vessel traffic.
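[Editor's note] The scheduling side described above hinges on multi-objective evolutionary search. The sketch below shows the Pareto-dominance core of such an algorithm on two invented objectives (mean vessel service time and resource deployment cost) for candidate fuzzy-controller parameter vectors; the thesis's actual objectives, encoding, and operators are not reproduced here.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evaluate(params):
    """Stand-in objectives: (service time, resource cost). A real run
    would simulate port operations under the fuzzy controller 'params'."""
    cranes, aggressiveness = params
    service_time = 100.0 / cranes + 5.0 * (1.0 - aggressiveness)
    cost = 12.0 * cranes + 8.0 * aggressiveness
    return service_time, cost

# One generation of a minimal (mu + lambda) multi-objective step.
random.seed(1)
pop = [(random.randint(1, 8), random.random()) for _ in range(20)]
children = [(max(1, c + random.choice([-1, 0, 1])),
             min(1.0, max(0.0, a + random.gauss(0, 0.1))))
            for c, a in pop]
union = pop + children
scored = [(p, evaluate(p)) for p in union]
front = [p for p, f in scored
         if not any(dominates(g, f) for _, g in scored if g != f)]
print("non-dominated parameter sets:", front)
```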
40

A Framework for the Creation of a Unified Electronic Medical Record Using Biometrics, Data Fusion and Belief Theory

Leonard, Dwayne Christopher 13 December 2007 (has links)
The technology exists for the migration of healthcare data from its archaic paper-based system to an electronic one and, once in digital form, for it to be transported anywhere in the world in a matter of seconds. The advent of universally accessible healthcare data benefits all participants, but one of the outstanding problems that must be addressed is how to uniquely identify and link a patient to his or her specific medical data. To date, the few solutions proposed for this problem have been limited in their effectiveness. We propose the use of biometric technology within our FIRD framework to solve the unique association of a patient with his or her medical data. This would allow a patient secure, real-time electronic access to all of his or her recorded healthcare information whenever necessary, with minimal effort, greater effectiveness, and ease.
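[Editor's note] Belief theory enters a framework like the one above as a way to combine evidence from several biometric matchers. The standard tool is Dempster's rule of combination, sketched below for two mass functions over the frame {match, nonmatch}; the mass values are invented, and how the FIRD framework actually assigns masses is not described in the abstract.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions (dicts mapping
    frozenset hypotheses to masses), renormalizing out the conflict."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two biometric matchers reporting on the same identity claim.
MATCH, NONMATCH = frozenset({"match"}), frozenset({"nonmatch"})
EITHER = MATCH | NONMATCH                        # ignorance mass
fingerprint = {MATCH: 0.7, NONMATCH: 0.1, EITHER: 0.2}
iris        = {MATCH: 0.8, NONMATCH: 0.1, EITHER: 0.1}
fused = dempster_combine(fingerprint, iris)
print(fused[MATCH])   # belief on "match" rises above either source alone
```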
