  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Development of Adaptive Computational Algorithms for Manned and Unmanned Flight Safety

Elkin, Colin P. January 2018 (has links)
No description available.
362

INTEGRATING CONNECTED VEHICLE DATA FOR OPERATIONAL DECISION MAKING

Rahul Suryakant Sakhare (9320111) 26 April 2023 (has links)
Advancements in technology have propelled the availability of richer and more frequent information about traffic conditions, as well as the external factors that impact traffic, such as weather and emergency response. Most newer vehicles are equipped with sensors that transmit their data back to the original equipment manufacturer (OEM) at near real-time fidelity. A growing number of such connected vehicles (CV) and the advent of third-party aggregators of data from various OEMs have made big traffic data commercially available. Agencies maintaining and managing surface transportation can leverage such big data for efficiency gains. The focus of this dissertation is enhancing the use of CV data, and of applications derived from fusing it with other datasets, to extract meaningful information that aids agencies in data-driven decision making to improve network-wide mobility and safety.

One of the primary concerns with CV data is sampling, particularly during low-volume overnight hours. An evaluation of over 3 billion CV records from May 2022 in Indiana showed an overall CV penetration rate of 6.3% on interstates and 5.3% on non-interstate roadways. Fusing CV traffic speeds with precipitation intensity from NOAA's High-Resolution Rapid Refresh (HRRR) model over 42 rainy days showed that average traffic speed fell by approximately 8.4% under conditions classified as very heavy rain compared to no rain. The aggregate and disaggregate analyses performed in this study enable agencies and automobile manufacturers to answer the often-asked question of what rain intensity begins to impact traffic speeds. Proactive measures, such as advance warnings that improve motorists' situational awareness and enhance roadway safety, should be considered during very heavy rain, wind events, and low-daylight conditions.

Scalable methodologies for systematically analyzing hard-braking and speed data were also developed. The study demonstrated, both quantitatively and qualitatively, how CV data enables near real-time assessment of work zone operations using metrics such as congestion, location-based speed profiles, and hard braking. The availability of data across states and the ease of scaling make the methodology implementable on a state or national basis for tracking any highway work zone with little to no infrastructure investment. These techniques provide a nationwide opportunity to assess current guidelines and to inform updates to design procedures that improve the consistency and safety of construction work zones.

CV data was also used to evaluate the impact of queue warning trucks sending digital alerts. Based on 370 hours of queueing with queue trucks present and 58 hours without, hard-braking events decreased by approximately 80% when queue warning trucks alerted motorists to impending queues, improving work zone safety.

The ubiquity of CV data providers creates emerging opportunities to identify and measure traffic shock waves, and their forming or recovery speeds, anywhere across a roadway network. A methodology for identifying different shock waves is presented; across the case studies, typical backward-forming shock wave speeds ranged from 1.75 to 11.76 mph, whereas backward-recovery shock wave speeds ranged from 5.78 to 16.54 mph. The significance of this is illustrated with a case study of a secondary crash, which suggested that accelerating incident clearance by 9 minutes could have prevented the secondary crash at the back of the queue. The ability to identify and measure shock wave speeds gives stakeholders a holistic perspective on both on-scene risk and risk at the back of the queue for traffic management decision making. Near real-time shock wave estimates from CV data can also inform travel-time prediction models and serve as inputs to navigation systems for identifying alternate routes ahead of a driver's arrival.

The overall contribution of this thesis is a set of scalable methodologies and evaluation techniques that extract valuable information from CV data to aid agencies in operational decision making.
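The shock wave speeds quoted above are, at heart, slopes in the space-time plane. A minimal sketch (not the dissertation's actual methodology) of estimating a wave speed from two observations of a queue boundary:

```python
def shockwave_speed_mph(p1, p2):
    """Estimate shock wave speed (mph) from two observations of a queue
    boundary in the space-time plane.

    Each observation is (time_hours, position_miles); the slope of the
    line joining them is the boundary's propagation speed. Negative
    values indicate a wave moving upstream (backward)."""
    (t1, x1), (t2, x2) = p1, p2
    if t2 == t1:
        raise ValueError("observations must differ in time")
    return (x2 - x1) / (t2 - t1)

# Back of queue observed at mile 100.0 at t = 0 h and mile 99.5 at
# t = 0.05 h: the queue front moves upstream at 10 mph.
speed = shockwave_speed_mph((0.0, 100.0), (0.05, 99.5))
```

In practice the boundary points would come from clustering slow CV trajectory samples, but the speed estimate itself reduces to this slope.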
363

Data Fusion of Infrared, Radar, and Acoustics Based Monitoring System

Mirzaei, Golrokh 22 July 2014 (has links)
No description available.
364

Image and RADAR fusion for autonomous vehicles / Bild och RADAR för autonoma fordon

de Gibert Duart, Xavier January 2023 (has links)
Robust detection, localization, and tracking of objects are essential for autonomous driving. Computer vision, based on camera sensors, has largely driven development in recent years, but 3D localization from images remains challenging. Sensors such as LiDAR or RADAR are used to compute depth, each with its own advantages and drawbacks. The main idea of the project is to combine camera images with RADAR detections in order to estimate depths for the objects appearing in the images. Fusion strategies can give a more detailed description of the environment by combining the 3D localization capabilities of range sensors with the higher spatial resolution of image data. The approach fuses 3D detections from the RADAR onto the image plane, which requires tight synchronization of the sensors and projection of the RADAR data onto the image.
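The projection step described above is a standard pinhole-camera operation. A minimal sketch, assuming known camera intrinsics `K` and a radar-to-camera extrinsic `(R, t)`; the numeric values below are hypothetical, not calibration data from the thesis:

```python
import numpy as np

def project_radar_to_image(points_3d, K, R, t):
    """Project Nx3 RADAR detections (radar frame) onto the image plane.

    K: 3x3 camera intrinsics; R, t: rotation and translation taking
    radar coordinates into the camera frame. Returns Nx2 pixel
    coordinates and the per-point depth along the camera axis."""
    pts_cam = points_3d @ R.T + t          # radar -> camera frame
    uvw = pts_cam @ K.T                    # pinhole projection
    depth = uvw[:, 2]
    pixels = uvw[:, :2] / depth[:, None]   # perspective divide
    return pixels, depth

# Hypothetical intrinsics; identity extrinsics (sensors already aligned).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
px, d = project_radar_to_image(np.array([[0.0, 0.0, 10.0]]), K, R, t)
# A point 10 m straight ahead lands at the principal point (320, 240).
```

The synchronization requirement enters before this step: each radar detection must be paired with the image frame closest to it in time, or the projected pixel drifts with ego-motion.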
365

Automated evaluation of three dimensional ultrasonic datasets / Évaluation automatique de données ultrasonores en 3D

Osman, Ahmad 14 June 2013 (has links)
Non-destructive testing has become necessary to ensure the quality of materials and components, either in service or at the production stage. This requires a rapid, robust, and reliable testing technique. As a primary testing technique, ultrasound has unique abilities to assess the location, size, and shape of discontinuities. Such information plays a vital role in acceptance criteria based on the safety and quality requirements of manufactured components. Consequently, the ultrasound technique is used extensively, especially in the inspection of large-scale composites manufactured in the aerospace industry. Significant technical advances have contributed to optimizing ultrasound acquisition techniques such as the sampling phased array technique. However, acquisition systems need to be complemented by an automated data analysis procedure to avoid the time-consuming manual interpretation of all produced data. Such a complement accelerates the inspection process and improves its reliability. The objective of this thesis is to propose an analysis chain dedicated to automatically processing the 3D ultrasound volumes obtained using the sampling phased array technique. First, a detailed study of the speckle noise affecting the ultrasound data was conducted, as speckle reduces data quality. Afterward, an analysis chain was developed, composed of a segmentation procedure followed by a classification procedure. The proposed segmentation methodology is adapted to 3D ultrasound data and aims to detect all potential defects inside the input volume. While detecting defects is vital, one main difficulty is the high number of false alarms produced by the segmentation procedure. Correctly distinguishing false alarms is necessary to reduce the rejection ratio of sound parts, and this must be done without risking missing true defects. A powerful classifier is therefore needed to efficiently distinguish true defects from false alarms; this is achieved using a specific classification approach based on data fusion theory. The chain was tested on several volumetric ultrasound measurements of carbon fiber reinforced polymer components. Experimental results revealed high accuracy and reliability in detecting, characterizing, and classifying defects, with a very low false alarm rate.
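As a rough illustration of the detect-everything-first segmentation philosophy (not the thesis's speckle-aware method), a simple adaptive amplitude threshold over a 3D volume might look like this; the volume and threshold factor are synthetic:

```python
import numpy as np

def detect_candidates(volume, k=3.0):
    """Flag voxels whose amplitude rises k standard deviations above the
    volume mean as potential defect indications. Real speckle-aware
    segmentation is far more involved; this is the simplest adaptive
    threshold that mirrors the high-recall, high-false-alarm first stage."""
    thr = volume.mean() + k * volume.std()
    return volume > thr

rng = np.random.default_rng(0)
vol = rng.normal(0.0, 1.0, size=(16, 16, 16))  # synthetic speckle-like noise
vol[8, 8, 8] = 25.0                            # one strong reflector
mask = detect_candidates(vol)
```

A stage like this catches every strong reflector but also a handful of noise voxels, which is exactly why the downstream classifier is needed.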
366

Data Fusion for Multi-Sensor Nondestructive Detection of Surface Cracks in Ferromagnetic Materials

Heideklang, René 28 November 2018 (has links)
Fatigue cracking is a dangerous and cost-intensive phenomenon that requires early detection. But at high test sensitivity, the abundance of false indications limits the reliability of conventional materials testing. This thesis exploits the diversity of physical principles offered by different nondestructive surface inspection methods, applying data fusion techniques to increase the reliability of defect detection. The first main contribution is a set of novel approaches for the fusion of NDT images. These surface scans are obtained from state-of-the-art inspection procedures in eddy current testing, thermal testing, and magnetic flux leakage testing. The implemented image fusion strategy demonstrates that simple algebraic fusion rules are sufficient for high performance, given adequate signal normalization: data fusion reduces the rate of false positives by a factor of six over the best individual sensor at a 10 μm deep groove. Moreover, the utility of state-of-the-art image representations, such as the Shearlet domain, is explored. However, the theoretical advantages of such directional transforms are not attained in practice with the given data; nevertheless, the benefit of fusion over single-sensor inspection is confirmed a second time. Furthermore, this work proposes novel techniques for fusion at a high level of signal abstraction. A kernel-based approach is introduced to integrate spatially scattered detection hypotheses; it explicitly handles the registration errors that are unavoidable in practice. Surface discontinuities as shallow as 30 μm are reliably found by fusion, whereas the best individual sensor requires depths of 40–50 μm for successful detection. The experiment is replicated on a similar second test specimen. Practical guidelines are given at the end of the thesis, and the need for a data sharing initiative is stressed to promote future research on this topic.
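"Simple algebraic fusion rules given adequate signal normalization" can be sketched as pixel-wise operations on z-scored scans. The normalization and rules below are generic illustrations on synthetic data, not the thesis's exact pipeline:

```python
import numpy as np

def zscore(img):
    """Normalize a scan to zero mean, unit variance so that channels
    from different NDT modalities become comparable."""
    return (img - img.mean()) / img.std()

def fuse(images, rule="mean"):
    """Pixel-wise algebraic fusion of co-registered, normalized scans."""
    stack = np.stack([zscore(im) for im in images])
    return stack.mean(axis=0) if rule == "mean" else stack.max(axis=0)

# Two synthetic co-registered scans sharing one defect indication.
rng = np.random.default_rng(42)
scan_a = rng.normal(0.0, 1.0, size=(8, 8))
scan_b = rng.normal(0.0, 1.0, size=(8, 8))
scan_a[2, 2] += 8.0
scan_b[2, 2] += 8.0
fused = fuse([scan_a, scan_b])
```

Averaging z-scored channels suppresses modality-specific noise while indications present in several channels reinforce each other, which is the mechanism behind the reported false-positive reduction.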
367

Algoritmos de inteligência computacional em instrumentação: uso de fusão de dados na avaliação de amostras biológicas e químicas / Computational intelligence algorithms for instrumentation: biological and chemical samples evaluation by using data fusion

Negri, Lucas Hermann 24 February 2012 (has links)
This work presents computational methods to process data from electrical impedance spectroscopy and fiber Bragg grating interrogation in order to characterize the evaluated samples. Estimation and classification systems were developed, using the signals in isolation or simultaneously. A new method is proposed to adjust, by particle swarm optimization, the parameters of functions that describe electrical impedance spectra; the method was also extended to correct distorted spectra. A benchmark of peak detection algorithms for fiber Bragg grating interrogation was performed, covering the algorithms currently used in the literature and evaluating accuracy, precision, and computational performance. This comparative study used both simulated and experimental data. No single algorithm is optimal on all counts, but a suitable one can be chosen once the application requirements are known. A novel peak detection algorithm based on an artificial neural network is proposed, recommended when the analyzed spectrum is distorted or asymmetric. Artificial neural networks and support vector machines were employed with the data processing algorithms to classify samples or estimate their characteristics in experiments with bovine meat, milk, and automotive fuel. The results show that the proposed data processing methods are useful for extracting the main information from the data, and that the data fusion schemes employed met their initial classification and estimation objectives.
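A classic baseline among benchmarked FBG peak detectors is the centroid method, whose bias on asymmetric spectra is precisely what motivates a neural-network detector. A sketch on a synthetic symmetric peak (wavelengths and peak position are illustrative values):

```python
import numpy as np

def centroid_peak(wavelengths, intensities, threshold=0.5):
    """Centroid estimator for an FBG reflection peak: the
    intensity-weighted mean wavelength of all samples above a fraction
    of the maximum. Simple and fast, but biased when the spectrum is
    asymmetric or distorted."""
    mask = intensities >= threshold * intensities.max()
    w, i = wavelengths[mask], intensities[mask]
    return (w * i).sum() / i.sum()

# Synthetic symmetric reflection peak centered at 1550.2 nm.
wl = np.linspace(1549.0, 1551.0, 201)
spectrum = np.exp(-((wl - 1550.2) / 0.1) ** 2)
peak = centroid_peak(wl, spectrum)
```

On this symmetric spectrum the centroid recovers the true Bragg wavelength; skewing the spectrum shifts the estimate, which is the failure mode the proposed network addresses.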
368

Fusion de données géoréférencées et développement de services interopérables pour l’estimation des besoins en eau à l’échelle des bassins versants / Geospatial data fusion and development of interoperable services to assess water needs at watershed scale

Beaufils, Mickaël 04 December 2012 (has links)
De nos jours, la préservation de l’environnement constitue un enjeu prioritaire. La compréhension des phénomènes environnementaux passe par l’étude et la combinaison d’un nombre croissant de données hétérogènes. De nombreuses initiatives internationales (INSPIRE, GEOSS) visent à encourager le partage et l’échange de ces données. Dans ce sujet de recherche, nous traitons de l’intérêt de mettre à disposition des modèles scientifiques sur le web. Nous montrons l’intérêt d’utiliser des applications s’appuyant sur des données géoréférencées et présentons des méthodes et des moyens répondant aux exigences d’interopérabilité. Nous illustrons notre approche par l’implémentation de modèles d’estimation des besoins en eau agricoles et domestiques fonctionnant à diverses échelles spatiales et temporelles. Un prototype basé sur une architecture entièrement orientée services web a été développé. L’outil s’appuie sur les standards Web Feature Service (WFS), Sensor Observation Service (SOS) et Web Processing Service (WPS) de l’OGC. Enfin, la prise en compte des imperfections des données est également abordée avec l’intégration de méthodes d’analyse de sensibilité et de propagation de l’incertitude. / Nowadays, preservation of the environment is a main priority. Understanding of environmental phenomena requires the study and the combination of an increasing number of heterogeneous data. Several international initiatives (INSPIRE, GEOSS) aims to encourage the sharing and exchange of those data.In this thesis, the interest of making scientific models available on the web is discussed. The value of using applications based on geospatial data is demonstrated. Several methods and means that satisfy the requirements of interoperability are also purposed.Our approach is illustrated by the implementation of models for estimating agricultural and domestic water requirements. Those models can be used at different spatial scales and temporal granularities. 
A prototype based on a complete web service oriented architecture was developed. The tool is based on the OGC standards Web Feature Service (WFS), Sensor Observation Service (SOS) and Web Processing Service (WPS).Finally, taking into account the imperfections of the data is also discussed with the integration of methods for sensitivity analysis and uncertainty propagation.
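A WPS interaction of the kind described can be sketched as a KVP-encoded Execute request; the endpoint, process identifier, and input names below are placeholders, not the thesis's actual services:

```python
from urllib.parse import urlencode

def wps_execute_url(base_url, process_id, inputs):
    """Build a KVP-encoded WPS 1.0.0 Execute request URL. DataInputs
    are serialized as semicolon-separated key=value pairs per the
    WPS KVP encoding."""
    data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process_id,
        "datainputs": data_inputs,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and process name for a water-need estimation model.
url = wps_execute_url("https://example.org/wps", "WaterNeedEstimation",
                      {"watershed": "FR-123", "year": "2011"})
```

Wrapping a model behind such a request is what makes it callable from any interoperable client, independent of where the model actually runs.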
369

Fusion of Sparse Reconstruction Algorithms in Compressed Sensing

Ambat, Sooraj K January 2015 (has links) (PDF)
Compressed Sensing (CS) is a new paradigm in signal processing that exploits the sparse or compressible nature of a signal to significantly reduce the number of measurements without compromising reconstruction quality. Many algorithms have recently been reported in the literature for efficient sparse signal reconstruction. Nevertheless, it is well known that the performance of any sparse reconstruction algorithm depends on many parameters: the number of measurements, the dimension of the sparse signal, the level of sparsity, the measurement noise power, and the underlying statistical distribution of the non-zero elements of the signal. Satisfactory performance imposes certain requirements on these parameters, and the requirements differ from algorithm to algorithm. Many applications are unlikely to fulfil them. For example, imaging speed is crucial in many Magnetic Resonance Imaging (MRI) applications; this restricts the number of measurements, which in turn affects medical diagnosis. Hence, any strategy to improve signal reconstruction in such adverse scenarios is of substantial interest in CS. Interestingly, performance degradation in these cases does not always imply complete failure: even in adverse conditions, a sparse reconstruction algorithm may provide partially correct information about the signal. This thesis studies this scenario and proposes a novel fusion framework and an iterative framework that exploit the partial information available in sparse signal estimates to improve reconstruction. The fusion framework employs multiple sparse reconstruction algorithms independently. We first propose a fusion algorithm, FACS, which fuses the estimates of multiple participating algorithms to improve the sparse signal estimate. To alleviate the inherent drawbacks of FACS and further improve reconstruction, we propose another fusion algorithm, CoMACS, along with variants. For low-latency applications, we propose a latency-friendly fusion algorithm, pFACS. We also extend the fusion framework to the multiple measurement vector (MMV) problem with MMV-FACS. We theoretically analyse the proposed fusion algorithms, derive guarantees for performance improvement, and show that they are robust against both signal and measurement perturbations. We further demonstrate the efficacy of the proposed algorithms via numerical experiments: (i) using sparse signals with different statistical distributions in noise-free and noisy scenarios, and (ii) using real-world ECG signals. The experiments show that, for a judicious choice of participating algorithms, the fused sparse signal estimate is often better than that of the best participating algorithm. Since the fusion framework requires running multiple sparse reconstruction algorithms, we also propose an iterative framework and algorithm, IFSRA, to improve the performance of a given arbitrary sparse reconstruction algorithm. We theoretically analyse IFSRA and derive convergence guarantees under signal and measurement perturbations; numerical experiments on synthetic and real-world data confirm its efficacy. The proposed fusion algorithms and IFSRA are general in nature and require no modification of the participating algorithms.
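The support-fusion idea behind such algorithms can be sketched as pooling candidate supports from several recovery algorithms and re-solving a restricted least-squares problem. This is an illustrative simplification on synthetic data, not the exact FACS algorithm:

```python
import numpy as np

def fuse_sparse_estimates(A, y, estimates, k):
    """Fuse sparse estimates from several recovery algorithms: pool the
    k largest-magnitude support indices of each estimate, solve least
    squares restricted to the pooled support, and keep the k strongest
    coefficients of the result."""
    support = set()
    for x in estimates:
        support.update(np.argsort(np.abs(x))[-k:])
    idx = sorted(support)
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    fused = np.zeros(A.shape[1])
    fused[idx] = coef
    keep = np.argsort(np.abs(fused))[-k:]
    out = np.zeros_like(fused)
    out[keep] = fused[keep]
    return out

# Two hypothetical algorithms that each recovered only one of the two atoms.
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 20))
x_true = np.zeros(20); x_true[3] = 1.0; x_true[15] = -2.0
y = A @ x_true
est1 = np.zeros(20); est1[3] = 0.9       # found atom 3 only
est2 = np.zeros(20); est2[15] = -1.8     # found atom 15 only
x_fused = fuse_sparse_estimates(A, y, [est1, est2], k=2)
```

Here neither estimate alone is correct, but their pooled support contains the true one, so the restricted least-squares step recovers the signal: the partial-information premise of the thesis in miniature.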
370

[en] DATA FUSION OF TIME OF FLIGHT TECHNIQUES USING ULTRASONIC TRANSDUCERS FOR WIND SPEED MEASUREMENT / [pt] FUSÃO DE DADOS DAS TÉCNICAS DE TEMPO DE TRÂNSITO UTILIZANDO TRANSDUTORES ULTRA-SÔNICOS PARA MEDIÇÃO DA VELOCIDADE DO VENTO

JUAN MOISES MAURICIO VILLANUEVA 10 January 2018 (has links)
Flow speed measurement has considerable relevance in industrial and scientific applications, where measurements with low uncertainty are required. In this work, a system for wind speed measurement using ultrasonic transducers is designed and modelled. The system uses data fusion techniques for time-of-flight estimation, combining independent information provided by the threshold detection and phase difference methods. The work consists of two main parts. The first part presents an analysis of uncertainty and error propagation for the threshold detection and phase difference techniques, considering two structures for wind speed measurement; measurement ranges and associated uncertainties are then compared for each structure. The second part studies data fusion techniques applied to instrumentation and measurement, singling out two main approaches: (a) Maximum Likelihood Estimation (MLE), and (b) fuzzy compatibility relations with Order Weighted Average (OWA) operators and partial aggregation. These fusion techniques are applied to time-of-flight estimation by combining several independent measurements obtained through the threshold detection and phase difference techniques. Finally, an uncertainty analysis quantifies the influence of each independent measurement on the fused result. A case study spanning both parts is presented, in which an instrument for low-uncertainty wind speed measurement is designed and modelled, using appropriate data fusion techniques to improve accuracy and reliability. Experiments in a low-speed wind tunnel verify the consistency of the theoretical studies.
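The two-path transit-time principle and the MLE fusion step can both be written in a few lines. This is a textbook sketch with illustrative numbers, not the instrument's actual signal chain:

```python
def wind_speed(t_down, t_up, L):
    """Wind speed along the acoustic path from the downwind and upwind
    transit times; independent of the speed of sound:
    v = (L/2) * (1/t_down - 1/t_up)."""
    return 0.5 * L * (1.0 / t_down - 1.0 / t_up)

def fuse_mle(values, variances):
    """Maximum-likelihood (inverse-variance weighted) fusion of
    independent estimates of the same quantity, e.g. a transit time
    measured by both the threshold-detection and phase-difference
    techniques."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Illustrative path L = 0.2 m, speed of sound c = 343 m/s, wind v = 5 m/s.
L, c, v = 0.2, 343.0, 5.0
t_down, t_up = L / (c + v), L / (c - v)   # simulated transit times
v_est = wind_speed(t_down, t_up, L)       # recovers v
```

The MLE rule weights the lower-variance technique more heavily, so the fused transit time is never worse (in variance) than the better of the two inputs, which is the motivation for fusing the two estimation techniques.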
