261

Design and Test of Algorithms for the Evaluation of Modern Sensors in Close-Range Photogrammetry / Entwicklung und Test von Algorithmen für die 3D-Auswertung von Daten moderner Sensorsysteme in der Nahbereichsphotogrammetrie

Scheibe, Karsten 01 December 2006 (has links)
No description available.
262

An efficient approach for high-fidelity modeling incorporating contour-based sampling and uncertainty

Crowley, Daniel R. 13 January 2014 (has links)
During the design process for an aerospace vehicle, decision-makers must have an accurate understanding of how each choice will affect the vehicle and its performance. This understanding is based on experiments and, increasingly often, computer models. In general, as a computer model captures a greater number of phenomena, its results become more accurate for a broader range of problems. This improved accuracy typically comes at the cost of significantly increased computational expense per analysis. Although rapid analysis tools have been developed that are sufficient for many design efforts, those tools may not be accurate enough for revolutionary concepts subject to grueling flight conditions such as transonic or supersonic flight and extreme angles of attack. At such conditions, the simplifying assumptions of the rapid tools no longer hold. Accurate analysis of such concepts would require models that do not make those simplifying assumptions, with the corresponding increases in computational effort per analysis.

As computational costs rise, exploration of the design space can become exceedingly expensive. If this expense cannot be reduced, decision-makers would be forced to choose between a thorough exploration of the design space using inaccurate models, or the analysis of a sparse set of options using accurate models. This problem is exacerbated as the number of free parameters increases, limiting the number of trades that can be investigated in a given time. In the face of limited resources, it can become critically important that only the most useful experiments be performed, which raises multiple questions: how can the most useful experiments be identified, and how can experimental results be used in the most effective manner? This research effort focuses on identifying and applying techniques which could address these questions. The demonstration problem for this effort was the modeling of a reusable booster vehicle, which would be subject to a wide range of flight conditions while returning to its launch site after staging.

Contour-based sampling, an adaptive sampling technique, seeks cases that will improve the prediction accuracy of surrogate models for particular ranges of the responses of interest. In the case of the reusable booster, contour-based sampling was used to emphasize configurations with small pitching moments; the broad design space included many configurations which produced uncontrollable aerodynamic moments for at least one flight condition. By emphasizing designs that were likely to trim over the entire trajectory, contour-based sampling improves the predictive accuracy of surrogate models for such designs while minimizing the number of analyses required.

The simplified models mentioned above, although less accurate for extreme flight conditions, can still be useful for analyzing performance at more common flight conditions. The simplified models may also offer insight into trends in the response behavior. Data from these simplified models can be combined with more accurate results to produce useful surrogate models with better accuracy than the simplified models but at less cost than if only expensive analyses were used. Of the data fusion techniques evaluated, Ghoreyshi cokriging was found to be the most effective for the problem at hand.

Lastly, uncertainty present in the data was found to negatively affect predictive accuracy of surrogate models. Most surrogate modeling techniques neglect uncertainty in the data and treat all cases as deterministic. This is plausible, especially for data produced by computer analyses which are assumed to be perfectly repeatable and thus truly deterministic. However, a number of sources of uncertainty, such as solver iteration or surrogate model prediction accuracy, can introduce noise to the data. If these sources of uncertainty could be captured and incorporated when surrogate models are trained, the resulting surrogate models would be less susceptible to that noise and correspondingly have better predictive accuracy. This was accomplished in the present effort by capturing the uncertainty information via nuggets added to the Kriging model.

By combining these techniques, surrogate models could be created which exhibited better predictive accuracy while selecting the most informative experiments possible. This significantly reduced the computational effort expended compared to a more standard approach using space-filling samples and data from a single source. The relative contributions of each technique were identified, and observations were made pertaining to the most effective way to apply the separate and combined methods.
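The nugget idea in the final paragraph can be illustrated with a short sketch. The snippet below is a minimal example, not the thesis's actual code: it assumes scikit-learn's Gaussian process regressor as the Kriging implementation, and the per-case noise variances are invented. Passing them through the `alpha` parameter adds a nugget to the covariance, so the surrogate smooths over noisy cases instead of interpolating them exactly.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy responses from a "solver" whose iteration error we pretend to know.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(30, 1))
noise_var = rng.uniform(0.005, 0.05, size=30)   # per-case uncertainty estimate
y = np.sin(X).ravel() + rng.normal(scale=np.sqrt(noise_var))

# The per-case noise variances become a nugget on the Kriging (GP) covariance,
# so the surrogate does not treat noisy cases as exact observations.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=noise_var, normalize_y=True).fit(X, y)

X_new = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)  # prediction plus uncertainty
```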
263

Infrastructure mediated sensing

Patel, Shwetak Naran 08 July 2008 (has links)
Ubiquitous computing application developers have limited options for a practical activity and location sensing technology that is easy to deploy and cost-effective. In this dissertation, I have developed a class of activity monitoring systems called infrastructure mediated sensing (IMS), which provides a whole-house solution for sensing activity and the location of people and objects. Infrastructure mediated sensing leverages existing home infrastructure (e.g., electrical systems, air conditioning systems) to mediate the transduction of events. In these systems, infrastructure activity is used as a proxy for a human activity involving the infrastructure. A primary goal of this type of system is to reduce economic, aesthetic, installation, and maintenance barriers to adoption by reducing the cost and complexity of deploying and maintaining the activity sensing hardware. I discuss the design, development, and applications of various IMS-based activity and location sensing technologies that leverage the following existing infrastructures: wireless Bluetooth signals, power lines, and central heating, ventilation, and air conditioning (HVAC) systems. In addition, I show how these technologies facilitate automatic and unobtrusive sensing and data collection for researchers or application developers interested in conducting large-scale in-situ location-based studies in the home.
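As a toy illustration of the proxy principle, and not any of the dissertation's actual detectors, one might flag abrupt changes in the energy of a signal tapped from the infrastructure (say, conducted noise on the power line) as candidate human-activity events. All names, features, and thresholds below are hypothetical:

```python
import numpy as np

def detect_events(signal, fs, win=0.5, threshold=4.0):
    """Flag abrupt changes in windowed signal energy as candidate events.

    signal: raw trace tapped from the infrastructure (e.g. line noise)
    fs:     sampling rate in Hz; win: window length in seconds
    """
    n = int(win * fs)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    energy = (windows ** 2).mean(axis=1)
    jumps = np.abs(np.diff(energy))
    # An event is an energy jump well above the typical window-to-window
    # variation; each detection is returned as a time in seconds.
    return np.where(jumps > threshold * jumps.mean())[0] * win
```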
264

Synchronous HMMs for audio-visual speech processing

Dean, David Brendan January 2008 (has links)
Both human perceptual studies and automatic machine-based experiments have shown that visual information from a speaker's mouth region can improve the robustness of automatic speech processing tasks, especially in the presence of acoustic noise. By taking advantage of the complementary nature of the acoustic and visual speech information, audio-visual speech processing (AVSP) applications can work reliably in more real-world situations than would be possible with traditional acoustic speech processing applications. The two most prominent applications of AVSP for viable human-computer interfaces involve the recognition of the speech events themselves, and the recognition of speakers' identities based upon their speech. However, while these two fields of speech and speaker recognition are closely related, there has been little systematic comparison of the two tasks under similar conditions in the existing literature. Accordingly, the primary focus of this thesis is to compare the suitability of general AVSP techniques for speech or speaker recognition, with a particular focus on synchronous hidden Markov models (SHMMs).

The cascading appearance-based approach to visual speech feature extraction has been shown to work well in removing irrelevant static information from the lip region to greatly improve visual speech recognition performance. This thesis demonstrates that these dynamic visual speech features also provide for an improvement in speaker recognition, showing that speakers can be visually recognised by how they speak, in addition to their appearance alone.

This thesis investigates a number of novel techniques for training and decoding of SHMMs that improve the audio-visual speech modelling ability of the SHMM approach over the existing state-of-the-art joint-training technique. Novel experiments demonstrate that the reliability of the two streams during training is of little importance to the final performance of the SHMM. Additionally, two novel techniques of normalising the acoustic and visual state classifiers within the SHMM structure are demonstrated for AVSP. Fused hidden Markov model (FHMM) adaptation is introduced as a novel method of adapting SHMMs from existing well-performing acoustic hidden Markov models (HMMs). This technique is demonstrated to provide improved audio-visual modelling over the jointly-trained SHMM approach at all levels of acoustic noise for the recognition of audio-visual speech events. However, the close coupling of the SHMM approach is shown to be less useful for speaker recognition, where a late integration approach is demonstrated to be superior.
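As a rough sketch of the synchronous-stream idea, and assuming simple linear log-likelihood weighting rather than the thesis's specific normalisation or FHMM-adaptation schemes, a decoder might fuse per-state acoustic and visual scores before a standard Viterbi pass:

```python
import numpy as np

def viterbi_fused(log_a, log_v, log_trans, log_init, lam=0.7):
    """Viterbi decoding over emissions fused from two synchronous streams.

    log_a, log_v: (T, S) per-frame acoustic / visual state log-likelihoods
    log_trans:    (S, S) log transition matrix; log_init: (S,) log priors
    lam:          acoustic stream weight (1 - lam goes to the visual stream)
    """
    log_e = lam * log_a + (1.0 - lam) * log_v      # synchronous stream fusion
    T, S = log_e.shape
    delta = log_init + log_e[0]
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # (S, S): from -> to
        psi[t] = scores.argmax(axis=0)             # best predecessor per state
        delta = scores.max(axis=0) + log_e[t]
    path = [int(delta.argmax())]                   # backtrack the best path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```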
265

Using dynamic time warping for multi-sensor fusion

Ko, Ming Hsiao January 2009 (has links)
Fusion is a fundamental human process that occurs at every level, from the sense organs, where visual and auditory information is received from the eyes and ears, to the highest levels of decision making, where the brain fuses visual and auditory information to make decisions. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features or decisions. Traditional frameworks for multi-sensor data fusion only address fusion at specific points in time. However, many real-world situations change over time. When a multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time, but also, more importantly, to understand the causalities of those states or events changing over time.

Hence, we proposed a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, fusion of the multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. There are three different fusion architectures for organising the group of agents: centralized, hierarchical, and distributed. The temporal fusion process of the proposed framework is elaborated using the information graph. Finally, the core of the proposed temporal fusion framework, the Dynamic Time Warping (DTW) temporal fusion agent, is described in detail.

Fusing multi-sensor data over a period of time is a challenging task, since the data to be fused consists of complex sequences that are multi-dimensional, multimodal, interacting, and time-varying in nature. Additionally, performing temporal fusion efficiently in real time is another challenge due to the large amount of data to be fused. To address these issues, we proposed the DTW temporal fusion agent, which includes four major modules: data pre-processing, DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to deal with the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problems of unknown start and end points, multimodal sequences of the same class that hence have different lengths locally and/or globally, and the challenges of online temporal fusion.

We evaluate the performance of the proposed DTW temporal fusion agent on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired from carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW-based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results from both datasets demonstrate that the proposed DTW temporal fusion agent outperforms HMM-based systems, and has the capability to perform online temporal fusion efficiently and accurately in real time.
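The core DTW recurrence underlying such a recogniser can be sketched as follows. This is a plain implementation with the standard step pattern; the thesis's extensions for unknown start and end points and for online operation are not reproduced here.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping distance between two (possibly multimodal)
    sequences, given as arrays of shape (n, d) and (m, d)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j],           # insertion
                                 D[i, j - 1],           # deletion
                                 D[i - 1, j - 1])       # match
    return D[n, m]
```

A template-based classifier of the kind described then simply assigns a query sequence the label of the class template with the smallest DTW distance.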
266

Fusions multimodales pour la recherche d'humains par un robot mobile / Multimodal fusions for human detection by a mobile robot

Labourey, Quentin 19 May 2017 (has links)
In this work, we consider the case of an indoor mobile robot that aims to detect the humans present in its environment and to position itself physically with respect to them, in order to better perceive their state. To do so, the robot is equipped with various sensors (RGB-Depth camera, microphones, laser rangefinder). This thesis contains contributions of various natures:

Classification of sound events in indoor environments: The proposed classification method relies on a small taxonomy intended to distinguish markers of human presence. Belief functions are used to account for classification uncertainty and to label a sound as "unknown".

Audio-visual fusion for detecting successive speakers in a conversation: A speaker detection method is proposed for the case of a stationary robot witnessing a social interaction. It relies on probabilistic audio-visual fusion and was tested on videos acquired by the robot.

Navigation dedicated to human detection using multimodal fusion: Using information from its heterogeneous sensors, the robot autonomously searches for humans in a known environment. The information is fused into a multimodal perception grid, which lets the robot decide on its next movement through an automaton based on priority levels of the perceived information. This system was implemented and tested on a Q.bo robot.

Credibilist modelling of the environment for navigation: The construction of the multimodal perception grid is improved with a fusion mechanism based on the theory of belief functions. This lets the robot maintain an evidential grid over time, containing the perceived information and its uncertainty. This system was evaluated first in simulation, then on the Q.bo robot.
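As a minimal sketch of the evidential-grid update, assuming a two-hypothesis frame {occupied, free} per cell (the thesis's actual frame and sensor models may differ), Dempster's rule can combine a cell's current mass function with a new sensor reading while keeping an explicit mass on ignorance:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'occ', 'free', 'Theta'},
    where 'Theta' carries the ignorance mass on {occ, free}."""
    conflict = m1['occ'] * m2['free'] + m1['free'] * m2['occ']
    k = 1.0 - conflict                      # normalisation after conflict
    return {
        'occ':   (m1['occ'] * m2['occ'] + m1['occ'] * m2['Theta']
                  + m1['Theta'] * m2['occ']) / k,
        'free':  (m1['free'] * m2['free'] + m1['free'] * m2['Theta']
                  + m1['Theta'] * m2['free']) / k,
        'Theta': (m1['Theta'] * m2['Theta']) / k,
    }

# A confident laser reading reinforces a hesitant prior on one grid cell:
cell = {'occ': 0.3, 'free': 0.2, 'Theta': 0.5}
laser = {'occ': 0.7, 'free': 0.1, 'Theta': 0.2}
print(dempster_combine(cell, laser))
```

Keeping the Theta mass explicit is what lets the grid distinguish "unknown" cells from cells where the sensors genuinely disagree.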
267

Méthodes pour l'analyse des champs profonds extragalactiques MUSE : démélange et fusion de données hyperspectrales ; détection de sources étendues par inférence à grande échelle / Methods for the analysis of extragalactic MUSE deep fields: hyperspectral unmixing and data fusion; detection of extended sources with large-scale inference

Bacher, Raphael 08 November 2017 (has links)
This work takes place in the context of the study of the hyperspectral deep fields produced by the European 3D spectrograph MUSE. These fields make it possible to explore the young remote Universe and to study the physical and chemical properties of the first galactic and extragalactic structures.

The first part of the thesis deals with the estimation of a spectral signature for each galaxy. As MUSE is a ground-based instrument, atmospheric turbulence strongly degrades its spatial resolving power, generating spectral mixing of multiple sources. To overcome this limitation, data fusion approaches are proposed, based on a linear mixing model and complementary data from the Hubble Space Telescope, allowing the spectral separation of the sources in the field.

The second goal of this thesis is to detect the Circum-Galactic Medium (CGM). The CGM, formed of clouds of gas surrounding some galaxies, is characterized by a spatially extended, faint spectral signature. To detect this kind of signal, a hypothesis testing approach is proposed, based on a max-test strategy over a dictionary; the test statistic is learned from the data. This method is then extended to better take into account the spatial structure of the targets, thus improving the detection power while still ensuring global error control.

All these developments are integrated in the software library of the MUSE consortium so that they can be used by the whole astrophysical community. Moreover, although particularly suited to MUSE data, these methods can be extended to other application fields that need faint extended source detection and source separation methods.
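A max-test of the kind described can be sketched as follows; the matched-filter statistic and the empirical calibration of the threshold are generic illustrations under invented names, not the exact procedure of the thesis:

```python
import numpy as np

def max_test(obs, dictionary):
    """Max of matched-filter statistics over unit-norm dictionary atoms.

    obs: observed spectrum, shape (d,); dictionary: atoms, shape (K, d).
    """
    atoms = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    return np.max(atoms @ obs)

def calibrate_threshold(null_samples, dictionary, alpha=0.01):
    """Learn the detection threshold from source-free data: the null
    distribution of the statistic is estimated empirically rather than
    assumed, then thresholded at the target false-alarm rate alpha."""
    stats = np.array([max_test(s, dictionary) for s in null_samples])
    return np.quantile(stats, 1.0 - alpha)
```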
268

Etude de méthodes de fusion de données multi-capteurs pour le diagnostic et la classification de situations complexes. Application au développement d'un dispositif intégré pour la détection de la chute des personnes âgées / Study of multi-sensor data fusion methods for diagnosing and classifying complex situations, applied to the development of an integrated device for detecting falls of the elderly

Poujaud, Julien 20 June 2012 (has links)
By 2050, seniors over 65 will account for nearly 22% of the world's population. Getting older is an opportunity, but for many elderly people it brings a loss of autonomy. This dependence requires help, sometimes permanent, from relatives or health professionals, and in some cases admission to an institution. Unfortunately, such help is not, and will not be, sufficient to allow every elderly person to live out their life with human dignity. Technological support can be provided by systems that automatically detect critical situations. Of course, such systems are not meant to replace caregivers, but to support their intervention. The goal of this thesis is to develop an integrated device that meets this need. After an in-depth review of the critical situations faced by the elderly living at home, a state of the art of existing detection systems is presented. This leads to the design of a multi-sensor system for diagnosing and classifying complex situations, based on different kinds of non-invasive sensors placed in the home and on the person. The collected data allow the person's activity to be classified by means of a fusion algorithm. In critical situations, the system automatically alerts the emergency services. The device was validated through functional tests and laboratory experiments.
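One classic building block such a device can fuse is an accelerometer rule that looks for a free-fall phase followed by an impact. The sketch below is purely illustrative (thresholds and window are invented), whereas the thesis fuses several sensors rather than relying on a single rule:

```python
import numpy as np

def fall_candidate(acc, fs, low_g=0.4, high_g=2.5, window_s=1.0):
    """Flag a free-fall phase (|a| well below 1 g) followed shortly by an
    impact peak (|a| well above 1 g).

    acc: (n, 3) accelerometer samples in units of g; fs: sampling rate (Hz).
    """
    mag = np.linalg.norm(acc, axis=1)          # acceleration magnitude
    w = int(window_s * fs)                     # impact search window
    for i in np.where(mag < low_g)[0]:         # candidate free-fall samples
        if np.any(mag[i:i + w] > high_g):      # impact within the window
            return True
    return False
```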
269

Fusão de dados multinível para sistemas de internet das coisas em agricultura inteligente / Multilevel data fusion for Internet of Things systems in smart agriculture

Torres, Andrei Bosco Bezerra 11 July 2017 (has links)
TORRES, A. B. B. Fusão de dados multinível para sistemas de internet das coisas em agricultura inteligente. 2017. 71 f. Dissertação (Mestrado em Engenharia de Teleinformática) – Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2017.

The usage of Wireless Sensor Networks (WSN) to detect and monitor phenomena is not a new concept, with studies dating back to the 1980s, but it has gained momentum with the expansion of the Internet of Things (IoT), which aims to enable everyday objects to sense, identify and analyze our world. For IoT to be viable at large scale, the objects/sensors must be low-cost, which implies a series of limitations: limited battery, low processing and storage capabilities, low accuracy, etc. In this context, data fusion techniques can be used to mitigate some of these limitations and make the adoption of low-cost sensors viable. This dissertation proposes a multilevel data fusion architecture for IoT that improves sensor accuracy, detects events and anomalies (such as sensor failures) and enables automated decision making. As a case study, experiments were conducted jointly with Embrapa in a precision-agriculture research project, monitoring experimental cultures of coconut and precocious dwarf cashew trees.
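In the spirit of the low-level stage of such an architecture (a hypothetical sketch, not the dissertation's algorithm), redundant low-cost sensors can be fused while rejecting faulty readings with a robust z-score:

```python
import numpy as np

def fuse_redundant(readings, max_dev=3.0):
    """Fuse one instant's readings from redundant sensors, discarding
    outliers by robust z-score (median / MAD) before averaging."""
    x = np.asarray(readings, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-9   # guard against zero MAD
    ok = np.abs(x - med) / (1.4826 * mad) < max_dev
    return x[ok].mean(), np.where(~ok)[0]      # fused value, faulty sensor ids

value, faulty = fuse_redundant([24.1, 24.3, 23.9, 31.7])  # sensor 3 drifts
```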
270

Outils d'aide à l'optimisation des campagnes d'essais non destructifs sur ouvrages en béton armé / Development of new tools for optimizing non-destructive inspection campaigns on reinforced concrete structures

Gomez-Cardenas, Carolina 04 December 2015 (has links)
Non-destructive testing (NDT) methods are essential for estimating concrete properties (mechanical or physical) and their spatial variability. They are also a useful tool for reducing the inspection budget of a structure. The proposed approach is part of an ANR project (EvaDéOS) whose objective is to optimize the monitoring of civil engineering structures by implementing preventive maintenance to reduce diagnosis costs. The objective of this thesis is to characterize, as well as possible, a particular property of concrete (e.g. mechanical strength, porosity, degree of saturation) using NDT techniques sensitive to the same properties. To this end, it is essential to develop objective tools that make it possible to rationalize a test campaign on reinforced concrete structures.

First, an optimal spatial sampling tool is proposed to reduce the number of inspection points. The most commonly used algorithm is spatial simulated annealing (SSA). This procedure is regularly used in geostatistical applications and other areas, but so far remains almost unexploited for civil engineering structures. In this thesis, an original optimal spatial sampling method (OSSM), inspired by SSA and based on spatial correlation, was developed and tested on on-site measurements with two complementary fitness functions: the mean prediction error and the error on the estimation of the global variability. The method has three parts. First, the spatial correlation of the NDT measurements is modeled by a variogram. Then, the relationship between the number of measurements arranged on a regular grid and the objective function is determined using a spatial interpolation method called kriging. Finally, the OSSM algorithm minimizes the objective function by changing the positions of a reduced number of NDT measurements, yielding an optimal irregular grid.

Destructive tests (DT) are needed to corroborate the information obtained from the NDT measurements. Because of their cost and the possible damage to the structure, an optimal sampling plan that collects only a limited number of cores is important. To this end, a previously developed data fusion procedure based on possibility theory is used to estimate concrete properties from the NDT measurements; it is calibrated through a recalibration step requiring DTs performed on cores. Since the DT results on cores are themselves uncertain, it is proposed to take this uncertainty into account and propagate it through the calibration onto the fused results. Propagating this uncertainty yields mean fused values per point with a standard deviation. A methodology is then proposed for positioning and minimizing the number of cores needed to inspect a structure, by two methods: first, applying the OSSM to the fused property values at each measurement point; and second, minimizing the average standard deviation over all the fused points obtained after propagating the DT uncertainties.

Finally, as an alternative to possibility theory, neural networks are also tested for their relevance and ease of use.
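The first step of the OSSM pipeline, modelling spatial correlation with a variogram, can be sketched as follows. Binning choices are illustrative, and the model-fitting and kriging steps that follow it are omitted:

```python
import numpy as np

def empirical_variogram(coords, values, n_bins=12):
    """Empirical semivariogram: mean squared half-difference of measurement
    pairs, binned by separation distance.

    coords: (n, 2) measurement positions; values: (n,) NDT measurements.
    Returns lag-bin centers and the semivariance estimate per bin.
    """
    n = len(values)
    i, j = np.triu_indices(n, k=1)                 # all measurement pairs
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    gamma = 0.5 * (values[i] - values[j]) ** 2     # pairwise semivariance
    bins = np.linspace(0.0, d.max(), n_bins + 1)
    which = np.clip(np.digitize(d, bins) - 1, 0, n_bins - 1)
    h = 0.5 * (bins[:-1] + bins[1:])               # lag-bin centers
    g = np.array([gamma[which == b].mean() if np.any(which == b) else np.nan
                  for b in range(n_bins)])
    return h, g
```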
