11

Navegação autônoma de robôs móveis e detecção de intrusos em ambientes internos utilizando sensores 2D e 3D / Autonomous navigation of mobile robots and indoor intruder detection using 2D and 3D sensors

Diogo Santos Ortiz Correa 13 June 2013 (has links)
Mobile and service robots are playing an increasingly broad and important role in modern society. One important class of autonomous mobile robot is the indoor surveillance and security robot. Surveillance robots can take over repetitive environment-monitoring tasks, including tasks that may pose risks to people's physical safety, and carry them out autonomously and safely. This work developed the main modules of the architecture of a robotic surveillance system, notably: (i) the application of relatively low-cost 3D perception (Kinect) and thermal (FLIR camera) sensors on the robotic platform; (ii) the detection of intruders (people) through the combined use of the 3D and thermal sensors; (iii) autonomous mobile robot navigation with obstacle detection and avoidance for indoor monitoring and surveillance tasks; and (iv) the identification and recognition of environmental features that allow the robot to navigate using topological maps. Computer vision, image processing, and computational intelligence methods were used to carry out the surveillance tasks. The Kinect range sensor provided the robotic system's perception, supporting navigation, obstacle avoidance, and identification of the robot's position with respect to the topological map. For person detection, the Kinect and the FLIR thermal camera were used together, integrating the data from both sensors to obtain a better perception of the environment and greater reliability in detecting people. The main result of this work is a system capable of navigating with a global topological map, moving through an indoor environment while avoiding collisions, and detecting the presence of humans (intruders) in the environment. The proposed system was tested in real situations using a Pioneer P3AT mobile robot equipped with the Kinect and the FLIR camera, successfully performing the defined navigation tasks. Other features were also implemented, such as person following ("follow me") and gesture-command recognition; integrating these modules with the developed system is proposed as future work.
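The abstract does not detail how the depth and thermal detections are combined; as a minimal sketch of the kind of detector-level fusion it describes, with hypothetical function names and threshold values, a depth-based person candidate could be kept only when its region is sufficiently warm in the registered thermal image:

    import numpy as np

    def confirm_intruders(depth_boxes, thermal_img, warm_thresh=30.0, min_warm_frac=0.3):
        # Keep depth-based person candidates whose bounding box is largely
        # covered by warm pixels in the (already registered) thermal image.
        # Names and threshold values are illustrative assumptions, not from the thesis.
        warm = thermal_img > warm_thresh              # boolean mask of "warm" pixels
        confirmed = []
        for (x, y, w, h) in depth_boxes:              # boxes given in thermal-image coordinates
            roi = warm[y:y + h, x:x + w]
            if roi.size and roi.mean() >= min_warm_frac:
                confirmed.append((x, y, w, h))
        return confirmed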
12

Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

McCrink, Matthew H. January 2015 (has links)
No description available.
13

On wide dynamic range logarithmic CMOS image sensors

Choubey, Bhaskar January 2006 (has links)
Logarithmic sensors can capture the wide dynamic range of intensities found in nature with a minimum number of bits and little post-processing. A simple circuit able to perform logarithmic capture is one using a MOS device in weak inversion. The output of this pixel, however, is crippled by fixed pattern noise. Techniques proposed to reduce this noise fail to produce high-quality images because of unaccounted-for gain variations in the pixel. An electronic calibration technique is proposed that is capable of reducing both multiplicative and additive FPN. Contrast properties matching those of the human eye are reported for these sensors. With FPN reduced, pixel performance at low intensities becomes the main concern: in this regime, the high leakage current of the CMOS process affects the logarithmic pixel. To reduce this current, two techniques are tested, one using a modified circuit and another a modified layout. The layout technique is observed to reduce the leakage current; in addition, this layout can be used to linearise the output of the logarithmic pixel at low light. The resulting response, linear at low light and logarithmic at high light, is investigated further, and a new model based on the device physics is derived to represent it. The fixed pattern noise profile is also investigated. An iterative scheme is proposed and verified that, using the new model, extracts the photocurrent flowing in the pixel and corrects the fixed pattern noise. Future research directions towards better logarithmic pixel designs and post-processing of their signals are proposed at the end of the thesis.
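The abstract does not give the calibration equations; under the common logarithmic pixel model v = a + b·ln(i), with per-pixel offset a (additive FPN) and gain b (multiplicative FPN), a simple two-reference calibration could be sketched as follows (assumed names; the thesis' electronic calibration scheme may differ):

    import numpy as np

    def calibrate_log_pixels(v1, v2, i1, i2):
        # Estimate per-pixel offset a and gain b from two frames captured at
        # known uniform reference intensities i1 and i2 (simplified sketch).
        b = (v2 - v1) / (np.log(i2) - np.log(i1))   # multiplicative (gain) FPN
        a = v1 - b * np.log(i1)                     # additive (offset) FPN
        return a, b

    def correct(v, a, b):
        # Map a raw frame to an estimate of ln(intensity), removing both
        # additive and multiplicative fixed pattern noise.
        return (v - a) / b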
14

Detecção de obstáculos usando fusão de dados de percepção 3D e radar em veículos automotivos / Obstacle detection using 3D perception and radar data fusion in automotive vehicles

Rosero, Luis Alberto Rosero 30 January 2017 (has links)
This master's project researched and developed methods and algorithms involving radar, computer vision, sensor calibration, and sensor data fusion for obstacle detection in autonomous/intelligent vehicles. The obstacle detection process is divided into three stages: first, reading the Radar and LiDAR signals and capturing data from a properly calibrated stereo camera; second, fusing the data obtained in the previous stage (Radar + camera, Radar + 3D LIDAR); and third, extracting features from the resulting information, identifying and separating the support plane (ground) from the obstacles, and finally detecting the obstacles produced by the data fusion. In this way, the various elements identified by the Radar can be distinguished, confirmed, and combined with the data obtained by computer vision or LIDAR (point clouds), yielding a more precise description of their contour, shape, size, and position. In the detection task it is important to locate and segment the obstacles so that decisions about the control of the autonomous/intelligent vehicle can be made later. Radar operates in adverse conditions (little or no light, dust, or fog) but provides only isolated, sparse points representing the obstacles. The stereo camera and 3D LIDAR, on the other hand, can define object contours and better represent their volume, although the camera is more susceptible to lighting variations and to restricted environmental and visibility conditions (e.g. dust, fog, rain). Before fusion, it is also important to spatially align the sensor data, that is, to calibrate the sensors properly so that data referenced in one sensor's coordinate system can be translated into another sensor's coordinate system or into a global one. The project was developed on the CaRINA II platform of the LRM Laboratory at ICMC/USP São Carlos and was implemented using ROS, OpenCV, and PCL, enabling experiments with real Radar, LIDAR, and stereo camera data as well as an evaluation of the quality of the data fusion and obstacle detection with these sensors.
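As an illustration of the spatial alignment step described above, a calibrated rigid transform can carry radar points into the camera frame and then into the image; the matrix names below are generic assumptions, not the thesis' actual calibration values:

    import numpy as np

    def radar_to_image(points_radar, T_cam_radar, K):
        # points_radar: N x 3 points in the radar frame.
        # T_cam_radar:  4 x 4 homogeneous transform, radar frame -> camera frame.
        # K:            3 x 3 camera intrinsic matrix.
        pts = np.asarray(points_radar, dtype=float)
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
        pts_cam = (T_cam_radar @ pts_h.T)[:3]                 # 3 x N, camera frame
        uvw = K @ pts_cam                                     # pinhole projection
        uv = (uvw[:2] / uvw[2]).T                             # N x 2 pixel coordinates
        return uv, pts_cam.T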
15

Méthodes informées de factorisation matricielle pour l'étalonnage de réseaux de capteurs mobiles et la cartographie de champs de pollution / Informed methods of matrix factorization for the calibration of mobile sensor networks and pollution field mapping

Dorffer, Clément 13 December 2017 (has links)
Mobile crowdsensing consists of acquiring geolocated, time-stamped data from a crowd of mobile sensors (embedded in or connected to smartphones). This thesis focuses on processing data from environmental mobile crowdsensing. In particular, blind sensor calibration is revisited as an informed matrix factorization problem with missing entries, in which the factor matrices respectively contain the calibration model, a function of the observed physical phenomenon (approaches for both affine and nonlinear sensor responses are proposed), and the calibration parameters of each sensor. Moreover, in the air quality monitoring application considered, very precise measurements, sparsely distributed in space and time, are assumed to be available and are combined with the many measurements from the mobile sensors. The approaches are "informed" because (i) the factor matrices are structured by the nature of the problem, (ii) the observed phenomenon admits a sparse decomposition in a known dictionary or can be approximated by a physical or geostatistical model, and (iii) the mean calibration function of the sensors to be calibrated is known. The proposed approaches outperform methods based on completing the observed data matrix as well as the multi-hop calibration techniques from the literature based on robust regression. Finally, the informed matrix factorization framework also allows a fine map of the observed physical field to be reconstructed.
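The thesis formulates calibration as a structured factorization; stripped of the "informed" structure, the underlying low-rank factorization with missing entries can be sketched as a plain masked alternating least squares (illustrative code, not the author's algorithm):

    import numpy as np

    def masked_als(X, mask, rank, n_iter=100, lam=1e-3, seed=0):
        # Fit X ~ W @ H using only the observed entries (mask == 1).
        # The informed approaches of the thesis add structure on top of this:
        # known calibration models, sparse dictionary or geostatistical priors.
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.standard_normal((m, rank))
        H = rng.standard_normal((rank, n))
        I = lam * np.eye(rank)                    # small ridge term for stability
        for _ in range(n_iter):
            for i in range(m):                    # update each row of W
                obs = mask[i] > 0
                Ho = H[:, obs]
                W[i] = np.linalg.solve(Ho @ Ho.T + I, Ho @ X[i, obs])
            for j in range(n):                    # update each column of H
                obs = mask[:, j] > 0
                Wo = W[obs]
                H[:, j] = np.linalg.solve(Wo.T @ Wo + I, Wo.T @ X[obs, j])
        return W, H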
16

Obtenção em larga escala de transmissores de pressão piezoresistivos de alto desempenho. / Large scale production of industrial piezoresistive pressure transmitters of high performance.

Mayor Herrera, César Augusto 18 November 2013 (has links)
A system is presented for reducing the calibration and compensation time of piezoresistive pressure sensors for automation and control applications with high-performance requirements, compensating several error sources, in particular nonlinearity and the influence of temperature on the measurement, based on an approach that prioritises measurement precision and accuracy, quality, and reliability. The designed system automates the calibration and thermal compensation of pressure transmitters through a multiple measurement and programming setup that allows the signals of up to 16 transmitters to be acquired simultaneously on a single computer, optimising the total process time and making the operation viable at industrial production scale. The total compensation and calibration time for more than one transmitter was reduced by approximately 6 hours per additional transmitter. For 16 transmitters, the system without multiplexing would take 114 hours, whereas the multiplexed system completes the whole process in 24 hours, a 78.9% reduction in total process time. Transmitters calibrated and compensated with the multiplexing system exhibit a TEB below 0.1% FS, showing that the designed system allows the pressure transmitters produced to meet performance characteristics equal to those achieved by the original single-transmitter compensation and calibration system.
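The timing figures quoted above are internally consistent; a quick check using only the numbers given in the abstract (16 transmitters, 114 h without multiplexing, 24 h with):

    t_serial, t_mux, n = 114.0, 24.0, 16
    reduction = (t_serial - t_mux) / t_serial    # 0.789... -> the reported 78.9%
    per_extra = (t_serial - t_mux) / (n - 1)     # 6.0 h saved per additional transmitter
    print(f"{reduction:.1%} reduction, {per_extra:.1f} h per extra transmitter")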
17

Design and implementation of sensor fusion for the towed synthetic aperture sonar

Meng, Rui Daniel January 2007 (has links)
For synthetic aperture imaging, position and orientation deviations are a major concern: unknown motions of a Synthetic Aperture Sonar (SAS) can blur the reconstructed images and degrade image quality considerably. Given the high sensitivity of synthetic aperture imaging to sonar deviation, this research aims to provide a complete navigation solution for a free-towed SAS, covering the design and construction of the navigation card through to data post-processing, in order to produce position, velocity, and attitude information for the sonar. The sensor configuration of the designed navigation card uses low-cost Micro-Electro-Mechanical-Systems (MEMS) Magnetic, Angular Rate, and Gravity (MARG) sensors: three angular rate gyroscopes, three dual-axis accelerometers, and a tri-axial magnetic hybrid. These MARG sensors are mounted orthogonally on a standard 180 mm Eurocard PCB to monitor the motion of the sonar in six degrees of freedom. Sensor calibration algorithms are presented for each individual sensor, according to its characteristics, to determine the sensor parameters precisely; the nonlinear least-squares method and a two-step estimator are used in particular for the calibration of the accelerometers and magnetometers. A quaternion-based extended Kalman filter, built on a total-state-space model, is developed to fuse the calibrated navigation data. In the model, frame transformations are described using quaternions rather than other attitude representations. Simulations and experimental results are presented in this thesis to verify the capability of the sensor fusion strategy.
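As a sketch of the kinematic core of a quaternion-based extended Kalman filter such as the one described, the prediction step integrates the gyroscope rates through the quaternion derivative (a generic formulation, not the thesis' exact filter):

    import numpy as np

    def propagate_quaternion(q, omega, dt):
        # q: unit quaternion [w, x, y, z]; omega: body angular rates (rad/s).
        # First-order integration of q_dot = 0.5 * Omega(omega) * q.
        wx, wy, wz = omega
        Omega = np.array([[0.0, -wx, -wy, -wz],
                          [ wx, 0.0,  wz, -wy],
                          [ wy, -wz, 0.0,  wx],
                          [ wz,  wy, -wx, 0.0]])
        q_new = q + 0.5 * dt * Omega @ q
        return q_new / np.linalg.norm(q_new)      # renormalise to keep unit length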
18

Vision-Based Navigation for a Small Fixed-Wing Airplane in Urban Environment

Hwangbo, Myung 01 May 2012 (has links)
An urban operation of unmanned aerial vehicles (UAVs) demands a high level of autonomy for tasks presented in a cluttered environment. While fixed-wing UAVs are well suited to long-endurance missions at high altitude, enabling them to navigate inside an urban area brings another level of challenge. Their inability to hover and their low agility make it harder to find a feasible path through a compact region, and the limited payload allows only low-grade sensors for state estimation and control. We address the problem of achieving vision-based autonomous navigation for a small fixed-wing aircraft in an urban area with contributions to several key topics. First, for robust attitude estimation during dynamic maneuvering, we take advantage of the line regularity of an urban scene, which features the vertical and horizontal edges of man-made structures. Fusing gravity-related line segments with gyroscopes in a Kalman filter provides drift-free, real-time attitude for flight stabilization. Second, as a prerequisite to sensor fusion, we present a convenient self-calibration scheme based on the factorization method; natural references available in urban settings, such as gravity, vertical edges, and distant scene points, are sufficient to find the intrinsic and extrinsic parameters of the inertial and vision sensors. Lastly, to generate a dynamically feasible motion plan, we propose a discrete planning method that encodes a path as interconnections of a finite set of trim states, which allows a significant reduction in the dimension of the search space and yields naturally implementable paths integrated with the flight controllers. The most probable path to reach a target is computed with a Markov Decision Process that accounts for motion uncertainty due to wind, and a minimum target observation time is imposed on the final motion plan to respect the camera's limited field of view. In this thesis, the effectiveness of our vision-based navigation system is demonstrated by what we call an "air slalom" task, in which the UAV must autonomously search for and localize multiple gates and pass through them sequentially. Experimental results with a 1 m wing-span airplane demonstrate the essential navigation capabilities required for urban operations, such as maneuvering through passageways between buildings.
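The thesis fuses gravity-related line segments with gyroscopes in a Kalman filter; a much-reduced single-axis complementary-filter sketch of the same idea, with assumed names and gain, is shown below:

    import numpy as np

    def fuse_roll(roll_prev, gyro_roll_rate, line_angles, dt, k=0.05):
        # Propagate roll with the gyro, then pull it toward a drift-free
        # measurement taken as the median image tilt of gravity-aligned
        # (vertical) scene edges. Gain k is an illustrative assumption.
        roll_pred = roll_prev + gyro_roll_rate * dt     # gyro integration (rad)
        roll_meas = np.median(line_angles)              # tilt of vertical edges (rad)
        return roll_pred + k * (roll_meas - roll_pred)  # blend toward the line-based measurement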