61

Scalable Sensor Network Field Reconstruction with Robust Basis Pursuit

Schmidt, Aurora C. 01 May 2013 (has links)
We study a scalable approach to information fusion for large sensor networks. The algorithm, field inversion by consensus and compressed sensing (FICCS), is a distributed method for detection, localization, and estimation of a propagating field generated by an unknown number of point sources. The approach combines results in the areas of distributed average consensus and compressed sensing to form low-dimensional linear projections of all sensor readings throughout the network, allowing each node to reconstruct a global estimate of the field. Compressed sensing is applied to continuous source localization by quantizing the potential locations of sources, transforming the model of sensor observations into a finite discretized linear model. We study the effects of structured modeling errors induced by spatial quantization and the robustness of ℓ1 penalty methods for field inversion. We develop a perturbations method to analyze the effects of spatial quantization error in compressed sensing and provide a model-robust version of noise-aware basis pursuit with an upper bound on the sparse reconstruction error. Numerical simulations illustrate system design considerations by measuring the performance of decentralized field reconstruction and the detection of point phenomena, comparing trade-offs of quantization parameters, and studying various sparse estimators. The method is extended to time-varying systems using a recursive sparse estimator that incorporates priors into ℓ1-penalized least squares. This thesis presents the advantages of inter-sensor measurement mixing as a means of efficiently spreading information throughout a network, while identifying sparse estimation as an enabling technology for scalable distributed field reconstruction systems.
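As a rough illustration of the ℓ1-penalized recovery step described in this abstract (a generic sketch, not the FICCS algorithm itself: the consensus stage is omitted, and the sensing matrix, grid size, and all names below are invented stand-ins), iterative soft-thresholding solves the ℓ1-penalized least-squares problem over a quantized source grid:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L              # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return x

# toy demo: 3 point sources on a 100-cell quantized grid, 30 noisy projections
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))                 # stand-in for the network's projections
x_true = np.zeros(100)
x_true[[12, 47, 83]] = [1.0, -0.5, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = ista(A, y)                                 # sparse field estimate
```

The Lagrangian (penalized) form used here matches the recursive ℓ1-penalized least-squares extension mentioned above; the constrained noise-aware basis pursuit variant would instead bound the residual norm explicitly.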
62

Threat Analysis Using Goal-Oriented Action Planning : Planning in the Light of Information Fusion

Bjarnolf, Philip January 2008 (has links)
An entity capable of assessing its own and others' action capabilities possesses the power to predict how the involved entities may change their world. Through this knowledge and a higher level of situation awareness, the assessing entity may choose the actions with the most suitable effect, resulting in that entity's desired world state.

This thesis covers aspects and concepts of an arbitrary planning system and presents a threat-analyzer architecture built on the novel planning system Goal-Oriented Action Planning (GOAP). This planning system has been suggested for improved missile route planning and targeting, and has been applied in contemporary computer games such as F.E.A.R. – First Encounter Assault Recon and S.T.A.L.K.E.R.: Shadow of Chernobyl. The GOAP architecture realized in this project is utilized by two agents that perform action planning to reach their desired world states. One of the agents employs a modified GOAP planner as a threat analyzer in order to determine what threat level the adversary agent constitutes. This project also introduces a conceptual schema of a general planning system that considers orders, doctrine and style, as well as a schema depicting an agent system using a blackboard in conjunction with the OODA loop.
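At its core, GOAP searches the space of world states for the cheapest action sequence whose outcome satisfies a goal. A minimal sketch of that loop follows (the action set, fact names, and cost model are invented for illustration; the thesis's modified planner and its threat-level scoring are not shown):

```python
import heapq
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    pre: frozenset       # facts that must hold before the action runs
    add: frozenset       # facts the action makes true
    delete: frozenset    # facts the action makes false
    cost: float

def goap_plan(start, goal, actions):
    """Uniform-cost search for the cheapest action sequence reaching the goal facts."""
    frontier = [(0.0, 0, frozenset(start), [])]   # (cost, tie-breaker, state, plan)
    seen, tie = set(), 0
    while frontier:
        cost, _, state, plan = heapq.heappop(frontier)
        if frozenset(goal) <= state:
            return plan, cost
        if state in seen:
            continue
        seen.add(state)
        for a in actions:
            if a.pre <= state:                    # preconditions satisfied
                tie += 1
                nxt = (state - a.delete) | a.add  # apply effects
                heapq.heappush(frontier, (cost + a.cost, tie, nxt, plan + [a.name]))
    return None, float("inf")

actions = [
    Action("load_weapon", frozenset({"has_ammo"}), frozenset({"armed"}),
           frozenset({"has_ammo"}), 1.0),
    Action("close_in", frozenset(), frozenset({"in_range"}), frozenset(), 2.0),
    Action("attack", frozenset({"armed", "in_range"}),
           frozenset({"target_down"}), frozenset(), 1.0),
]
plan, cost = goap_plan({"has_ammo"}, {"target_down"}, actions)
print(plan, cost)   # e.g. ['load_weapon', 'close_in', 'attack'] 4.0
```

A threat analyzer in this style could run the same search from the adversary's state and read the returned plan cost as an (inverse) threat indicator — a cheap plan to a hostile goal means a high threat.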
63

Towards a non-oriented approach to the evaluation of odor quality

Medjkoune, Massissilia 30 March 2018 (has links)
Characterizing the quality of smells is a complex process that consists in identifying, during sensory analysis sessions, a set of descriptors that best summarizes the olfactory sensation. Generally, this characterization results in a list of descriptors drawn from a vocabulary imposed by the industry players of a domain for their sensory analyses. These analyses represent a significant annual cost for manufacturers. Indeed, such oriented approaches, based on vocabulary learning, severely restrict the descriptors available to an uninitiated public and require a costly learning phase. If this characterization could be entrusted to naive evaluators, the number of participants in a sensory analysis session could be significantly increased while reducing its cost. However, in that setting each free description is no longer related to a set of unambiguous descriptors, but to a bag of terms expressed in natural language (NL).
Two issues are then attached to smell characterization in this approach. The first is how to translate such NL descriptions into structured descriptors; the second is how to summarize the set of individual characterizations proposed by a panel of evaluators into a single, consistent synthesis meaningful for industrial purposes. Hence, the first part of this work focuses on the definition and evaluation of models that summarize a set of terms into unambiguous entity identifiers selected from a given ontology. Among the several strategies explored in this contribution, we compare hybrid approaches that take advantage of both knowledge bases (symbolic representations) and word embeddings learned from large text corpora. Our results highlight the substantial benefit of mixing symbolic representations with classic word embeddings for this task. We then formally define the problem of summarizing sets of concepts and propose a model mimicking human-like intelligence for scoring alternative summaries with regard to a given objective function. Interestingly, this non-oriented approach to evaluating odor quality amounts to a cognitive automation of the tasks performed today by expert operators in sensory analysis sessions. It opens interesting perspectives for developing scalable sensory analyses based on large panels of evaluators, for instance when characterizing olfactory pollution around an industrial site.
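The hybrid term-to-concept step can be pictured as scoring each candidate ontology concept with a mix of embedding similarity and symbolic relatedness. A toy sketch under invented data (the two-dimensional vectors, the relatedness scores, and every name below are stand-ins, not the thesis's actual model or ontology):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(term_vec, concept_vecs, kb_relatedness, alpha=0.6):
    """Pick the ontology concept maximizing a convex mix of embedding
    similarity and a symbolic knowledge-base relatedness score in [0, 1]."""
    return max(concept_vecs,
               key=lambda c: alpha * cosine(term_vec, concept_vecs[c])
                             + (1 - alpha) * kb_relatedness.get(c, 0.0))

# toy stand-ins for pretrained embeddings and an odor ontology
vecs = {"fruity": np.array([0.9, 0.1]), "smoky": np.array([0.1, 0.9])}
term = np.array([0.8, 0.3])               # embedding of a free description, e.g. "apple-like"
kb   = {"fruity": 0.7, "smoky": 0.1}      # e.g. path-based relatedness in the ontology
print(disambiguate(term, vecs, kb))       # -> fruity
```

The convex mixing weight stands in for the more general combination strategies compared in the thesis; the point is only that symbolic and distributional evidence enter the same score.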
64

Information fusion for scene understanding

Xu, Philippe 28 November 2014 (has links)
Image understanding is a key issue in modern robotics, computer vision and machine learning. In particular, driving-scene understanding is very important in the context of advanced driver-assistance systems for intelligent vehicles. In order to recognize the large number of objects that may be found on the road, several sensors and decision algorithms are necessary. To make the most of existing state-of-the-art methods, we address the issue of scene understanding from an information-fusion point of view. The combination of many diverse detection modules, which may deal with distinct classes of objects and different data representations, is handled by reasoning in the image space. We consider image understanding at two levels: object detection and semantic segmentation. The theory of belief functions is used to model and combine the outputs of these detection modules. We emphasize the need for a fusion framework flexible enough to easily include new classes, new sensors and new object-detection algorithms. In this thesis, we propose a general method to model the outputs of classical machine-learning techniques as belief functions. Next, we apply our framework to the combination of pedestrian detectors using the Caltech Pedestrian Detection Benchmark. The KITTI Vision Benchmark Suite is then used to validate our approach in a semantic segmentation context using multi-modal information.
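At the heart of belief-function fusion is Dempster's rule, which conjunctively combines two mass functions and renormalizes away their conflict. A minimal sketch (the frame {ped, car, bg} and all mass values are invented; the thesis's discounting and its mapping from classifier outputs to masses are not shown):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions
    whose focal elements are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb               # mass sent to the empty set
    # note: undefined when conflict == 1 (totally contradictory sources)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# invented masses from two detection modules over the frame {ped, car, bg}
m_cam   = {frozenset({"ped"}): 0.6, frozenset({"ped", "car"}): 0.3,
           frozenset({"ped", "car", "bg"}): 0.1}
m_lidar = {frozenset({"ped"}): 0.5, frozenset({"bg"}): 0.2,
           frozenset({"ped", "car", "bg"}): 0.3}
print(dempster_combine(m_cam, m_lidar))
```

Assigning mass to sets such as {ped, car} is exactly what lets a module express "some obstacle, class unknown" without committing probability to either class — the flexibility the abstract argues for.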
65

Seafloor classification with a multi-swath multibeam echo sounder

Nguyen, Trung Kiên 19 December 2017 (has links)
This thesis, co-directed by Jean-Marc Boucher and Ronan Fablet (IMT Atlantique) and co-supervised by Didier Charlot (iXBlue), Gilles Le Chenadec and Michel Legris (ENSTA Bretagne), was carried out under a CIFRE agreement with the company iXBlue. iXblue develops and commercializes the SEAPIX multibeam echosounder (MBES), primarily dedicated to the fishery market. The system is designed to offer the best compromise between detection performance and cost. In addition to the classical characteristics of an MBES, it offers the unique feature of scanning the seafloor (and the water-column volume) by electronically steering the transmit beam to form multiple swaths, from port to starboard as well as from forward to backward. The objective of the thesis is to study the contribution of these new multi-swath capacities to the analysis and classification of the seafloor. The first part of the work consisted in a detailed analysis of the measurement chain; this study evaluated the consistency of the backscattering strength acquired by the different insonification modes. The second part investigated the discriminant characteristics of the backscattered signal while taking into account the acquisition geometry of each insonification mode. The last stage of the work addressed methods for fusing the acquired data, following two approaches for seafloor classification: the first considers data from the same insonification mode (intra-mode) and the second data from different modes (inter-mode). The experimental results highlight the interest of the proposed processing chain and of a multi-mode architecture on the real datasets.
66

Human-driven, information quality-aware fusion model

Botega, Leonardo Castro 26 January 2016 (has links)
Situational Awareness (SAW) is a cognitive process widely spread in areas that require critical decision-making and refers to the level of consciousness that an individual or team has about a situation. In the emergency-management domain, the situational information inferred by decision-support systems affects the SAW of human operators, which is also influenced by the dynamicity and critical nature of the events. Failures in SAW, typically caused by high levels of stress, information overload and the inherent need to perform multiple tasks, can induce human operators to errors in decision-making, resulting in risks to life, assets or the environment. Data-fusion processes present opportunities to improve human operators' SAW and enrich their knowledge of situations. However, problems related to the quality of information can lead to uncertainties, especially when human operators are also sources of information, requiring the restructuring of the fusion process. The state of the art in data- and information-fusion models presents approaches with limited, typically reactive, participation of human operators, besides solutions that are restricted in mechanisms to manage the quality of information throughout the fusion process. Thus, the present work presents a new information-fusion model, called Quantify (Quality-aware Human-driven Information Fusion Model), whose major differentials are the greater involvement of human operators and the use of information-quality management throughout the fusion process. In support of the Quantify model, an innovative methodology was developed for the assessment and representation of data and information quality, called IQESA (Information Quality Assessment Methodology in the Context of Emergency Situation Awareness), specialized in the context of emergency situational awareness and also involving the human operator. To validate the model and the methodology, a service-oriented architecture and two emergency-situation assessment systems were developed, one guided by the Quantify model and the other driven by the state-of-the-art model (User-Fusion). In a case study, robbery events reported to the emergency response service of the São Paulo State Military Police (Polícia Militar do Estado de São Paulo - PMESP) were submitted to the systems and then evaluated by PMESP operators, revealing higher rates of SAW with the Quantify model. These positive results confirm the need for this new model and methodology, besides revealing an opportunity to enrich the current emergency response system used by PMESP.
67

Predictive Techniques and Methods for Decision Support in Situations with Poor Data Quality

König, Rikard January 2009 (has links)
Today, decision-support systems based on predictive modeling are becoming more common, since organizations often collect more data than decision makers can handle manually. Predictive models are used to find potentially valuable patterns in the data, or to predict the outcome of some event. There are numerous predictive techniques, ranging from simple ones such as linear regression to complex, powerful ones like artificial neural networks. Complex models usually obtain better predictive performance, but are opaque and thus cannot be used to explain predictions or discovered patterns. The design choice of which predictive technique to use becomes even harder since no technique outperforms all others over a large set of problems. It is even difficult to find the best parameter values for a specific technique, since these settings are also problem-dependent. One way to simplify this vital decision is to combine several models, possibly created with different settings and techniques, into an ensemble. Ensembles are known to be more robust and powerful than individual models, and ensemble diversity can be used to estimate the uncertainty associated with each prediction.

In real-world data-mining projects, data is often imprecise, contains uncertainties or is missing important values, making it impossible to create models with sufficient performance for fully automated systems. In these cases, predictions need to be manually analyzed and adjusted. Here, opaque models like ensembles have a disadvantage, since the analysis requires understandable models. To overcome this deficiency of opaque models, researchers have developed rule-extraction techniques that try to extract comprehensible rules from opaque models while retaining sufficient accuracy.

This thesis suggests a straightforward but comprehensive method for predictive modeling in situations with poor data quality. First, ensembles are used for the actual modeling, since they are powerful, robust and require few design choices. Next, ensemble uncertainty estimations pinpoint predictions that need special attention from a decision maker. Finally, rule extraction is performed to support the analysis of uncertain predictions. Using this method, ensembles can be used for predictive modeling, in spite of their opacity and sometimes insufficient global performance, while the involvement of a decision maker is minimized. The main contributions of this thesis are three novel techniques that enhance the performance of the proposed method. The first technique deals with ensemble uncertainty estimation and is based on a successful approach often used in weather forecasting. The other two are improvements of a rule-extraction technique, resulting in increased comprehensibility and more accurate uncertainty estimations. / Sponsorship: This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.
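The ensemble-spread idea — route predictions with high member disagreement to a human — can be sketched with a plain bagging ensemble (a minimal illustration under invented data; the thesis's forecasting-inspired uncertainty estimator and its rule extraction are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_bagged(X, y, n_models=25):
    """Fit least-squares models on bootstrap resamples: a minimal bagging ensemble."""
    n = len(y)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)                 # bootstrap sample with replacement
        Xb = np.c_[np.ones(n), X[idx]]              # prepend an intercept column
        w, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
        models.append(w)
    return models

def predict_with_spread(models, X):
    Xb = np.c_[np.ones(len(X)), X]
    preds = np.array([Xb @ w for w in models])      # one row per ensemble member
    return preds.mean(axis=0), preds.std(axis=0)    # point estimate + disagreement

# toy data; predictions whose spread exceeds a threshold would go to a human analyst
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.standard_normal(200)
mean, spread = predict_with_spread(fit_bagged(X, y), X[:5])
```

The spread is only a proxy for predictive uncertainty, but it operationalizes the abstract's point: the opaque ensemble does the modeling, while disagreement flags the few cases that need manual, interpretable analysis.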
68

Cooperative perception and communication system for the protection of vulnerable road users

Merdrignac, Pierre 16 October 2015 (has links)
Cooperative intelligent transportation systems (C-ITS) have the opportunity to enhance road safety, especially the safety of vulnerable road users (VRUs), e.g., pedestrians and cyclists. Road accidents are mainly due to the inability of vehicles and VRUs to detect danger before a collision can no longer be avoided. We introduce a perception system based on laser and camera sensors to estimate the state of the VRUs located around the vehicle. A multi-class classification of road obstacles from laser data has been developed using statistical machine learning and Bayesian estimation. We propose an architecture for vehicle-to-pedestrian (V2P) communication that takes into account the limited energy resources of the devices carried by pedestrians, such as smartphones. Our solution relies on the standards defined by the ETSI ITS architecture for vehicular communication and proposes geographic dissemination for V2P communication. A cooperative perception/communication system can deal with increasingly complex scenarios by combining the ability of perception to estimate the dynamic state of detected obstacles with the ability of communication to exchange rich content between distant users. We introduce a multi-hypothesis fusion between perception and communication information and a smartphone application dedicated to protecting VRUs from road danger. The solutions proposed in this thesis are evaluated on real data. We carried out experiments on the INRIA campus demonstrating the assets of a cooperative system for the protection of vulnerable road users.
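To give a feel for how a perception estimate and a V2P message can reinforce each other, here is a deliberately simplified single-hypothesis fusion in odds space (it assumes source independence, which the thesis's multi-hypothesis scheme with association does not; all numbers and names are invented):

```python
def fuse_presence(p_perception, p_v2p, prior=0.1):
    """Naive-Bayes fusion of two independent estimates of pedestrian presence.

    Each input is a posterior P(pedestrian | source) built from the same prior;
    fusion multiplies their likelihood ratios in odds space."""
    prior_odds = prior / (1.0 - prior)
    lr = lambda p: (p / (1.0 - p)) / prior_odds   # posterior back to likelihood ratio
    odds = prior_odds * lr(p_perception) * lr(p_v2p)
    return odds / (1.0 + odds)

# e.g., a weak camera detection (0.4) reinforced by a smartphone beacon (0.7)
print(round(fuse_presence(0.4, 0.7), 3))          # -> ~0.933 under these assumptions
```

Two individually inconclusive cues combine into a strong presence estimate precisely because each exceeds the low prior — the basic payoff of fusing perception with communication.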
69

Evidential calibration and fusion of multiple classifiers: application to face blurring

Minary, Pauline 08 December 2017 (has links)
In order to improve the overall performance of a classification problem, one line of research consists in using several classifiers and fusing their outputs. To perform this fusion, some approaches merge the outputs using a fusion rule. This requires that the outputs first be made comparable, which is usually done using a probabilistic calibration of each classifier. The fusion can also be performed by concatenating the classifier outputs into a vector and applying a joint probabilistic calibration to it. Recently, extensions of probabilistic calibrations of an individual classifier have been proposed using evidence theory, in order to better represent the uncertainties inherent to the calibration process. In the first part of this thesis, this latter idea is adapted to joint probabilistic calibration techniques, leading to evidential versions. This approach is then compared to the aforementioned ones on classical classification datasets. In the second part, the challenging problem of blurring faces on images, which SNCF needs to address, is tackled. A state-of-the-art method for this problem is to use several face detectors, which return boxes with associated confidence scores, and to combine their outputs using an association step and an evidential calibration. It is shown that reasoning at the pixel level is more interesting than reasoning at the box level, and that among the fusion approaches discussed in the first part, the evidential joint calibration yields the best results. Finally, the case of images coming from videos is considered. To leverage the information contained in videos, a classical tracking algorithm is added to the blurring system.
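The probabilistic baseline that the evidential versions extend can be sketched in a few lines: per-detector Platt scaling versus a joint calibration of the concatenated score vector (the toy scores and labels below are invented, and the evidential calibration itself is not shown):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_calibrate(scores, labels):
    """Map one detector's raw scores to probabilities via a fitted sigmoid (Platt scaling)."""
    lr = LogisticRegression()
    lr.fit(np.asarray(scores).reshape(-1, 1), labels)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

def joint_calibrate(score_matrix, labels):
    """Joint calibration: concatenate all detectors' scores and calibrate the vector."""
    lr = LogisticRegression()
    lr.fit(score_matrix, labels)
    return lambda S: lr.predict_proba(np.asarray(S))[:, 1]

# toy data: two detectors' scores for 6 candidate face boxes (1 = face)
S = np.array([[2.1, 1.8], [1.5, 2.2], [0.3, 0.1],
              [-0.5, 0.4], [1.9, 1.1], [-1.2, -0.8]])
y = np.array([1, 1, 0, 0, 1, 0])
cal = joint_calibrate(S, y)
print(cal(S))   # calibrated face probabilities for each box
```

Joint calibration absorbs the fusion rule into the calibration map itself; the evidential versions studied in the thesis replace these point probabilities with belief functions that also encode calibration uncertainty.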
70

Information fusion using the theory of evidence for image segmentation

Chahine, Chaza 31 October 2016 (has links)
Information fusion has been widely studied in the field of artificial intelligence. Information is generally considered imperfect; therefore, the combination of several (possibly heterogeneous) sources of information can lead to more comprehensive and complete information. The fusion field generally distinguishes probabilistic approaches from non-probabilistic ones, which include the theory of evidence, developed in the 1970s. This method represents both the uncertainty and the imprecision of information by assigning masses not only to single hypotheses (the most common case for probabilistic methods) but to sets of hypotheses. The work presented in this thesis concerns information fusion for image segmentation.

To develop this method we start from the watershed algorithm, one of the most widely used methods for edge detection. Intuitively, the principle of the watershed is to consider the image as a topographic relief where the height of each point corresponds to its grey level. Assuming that the local minima are pierced with holes and the landscape is immersed in a lake, the water filling up from these minima generates the catchment basins, whereas the watershed lines are the dams built to prevent waters coming from different basins from mixing. The watershed is in practice applied to the gradient magnitude, and a region is associated with each minimum. The fluctuations in the gradient image and the great number of local minima therefore generate a large set of small regions, yielding an over-segmented result that can hardly be useful. Meyer and Beucher proposed the seeded, or marker-controlled, watershed to overcome this over-segmentation problem. The essential idea is to specify a set of markers (or seeds) to be considered as the only minima to be flooded. The number of detected objects is then equal to the number of seeds, and the result is marker-dependent. The automatic extraction of markers from images does not always lead to a satisfying result, especially in the case of complex images, and several methods have been proposed to determine these markers automatically.

We are particularly interested in the stochastic approach of Angulo and Jeulin, who estimate a probability density function (pdf) of contours after M simulations of conventional watershed segmentation, with N markers randomly selected for each simulation. A high pdf value is thus assigned to contour points that are strongly detected across the realizations. But the decision that a point belongs to the "contour" class remains dependent on a threshold value, so a single result cannot be obtained. To increase the robustness of this method and the uniqueness of its response, we propose to combine information using the theory of evidence. The watershed is generally computed on the gradient image, a first-order derivative, which gives global information on the contours in the image, while the Hessian matrix, the matrix of second-order derivatives, gives more local information on the contours. Our goal is to combine these two complementary kinds of information using the theory of evidence. The different versions of the fusion are tested on real images from the Berkeley database, and the results are compared with the five manual segmentations provided as ground truth with this database. The quality of the segmentations obtained by our methods is assessed with different measures: uniformity, precision, recall, specificity, sensitivity and the Hausdorff distance.
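The stochastic contour-density step can be sketched directly with scikit-image (a minimal illustration of the Angulo–Jeulin idea only; the evidential combination of gradient and Hessian sources is not shown, and the toy image stands in for real data):

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed, find_boundaries

def stochastic_watershed_pdf(gradient, n_markers=20, n_runs=50, seed=0):
    """Contour density from repeated marker-controlled watershed runs
    with uniformly drawn random seeds (Angulo-Jeulin style)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(gradient.shape)
    for _ in range(n_runs):
        markers = np.zeros(gradient.shape, dtype=int)
        r = rng.integers(0, gradient.shape[0], n_markers)
        c = rng.integers(0, gradient.shape[1], n_markers)
        markers[r, c] = np.arange(1, n_markers + 1)        # label each random seed
        labels = watershed(gradient, markers)
        acc += find_boundaries(labels, mode="inner")       # count watershed lines
    return acc / n_runs        # per-pixel frequency of being a contour

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0          # toy image: one square
pdf = stochastic_watershed_pdf(sobel(img))
```

Thresholding this pdf reproduces the threshold-dependence criticized above; the thesis instead feeds such complementary contour evidence into belief functions to obtain a single fused decision.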
