491 |
LabVIEW™ Database Interfacing For Robotic Control. Gebregziabher, Netsanet. 26 July 2006
Submitted to the faculty of the School of Informatics in partial fulfillment of the requirements for the degree Master of Science in Chemical Informatics (Laboratory Informatics Specialization), Indiana University, May 2006 / The Zymark™ System is a lab automation workstation that uses the Caliper Life Sciences (Hopkinton, MA) Zymate XP robot. At Indiana University-Purdue University Indianapolis, a Zymate is used in a course, INFO I510 Data Acquisition and Laboratory Automation, to demonstrate the fundamentals of laboratory robotics. This robot has been re-engineered to function with National Instruments™ graphical programming environment LabVIEW™. LabVIEW is an excellent tool for robotic control. Based on changing conditions, it is able to dynamically use data from any source to modify the operating parameters of a robot. Dynamically changing information must be stored in a readily accessible form. For example, there is a need to continuously store and update the calibration data of the robot, populate the settings of each axis and positioning inside the workspace, and also store robot positioning information. This can be achieved by using a database, which allows robotic control data to be easily searched and accessed. To address this need, an interface was developed which would allow full, dynamic communication between any LabVIEW program (called “virtual instruments,” or VIs) and the database. This has been accomplished by developing a set of subVIs that can be dropped into the calling robotic control VIs. With these subVIs, a user has the ability to create table and column information, delete a table, retrieve table information by clicking a particular table name on the user interface, or query using any SQL-specific combination of columns or tables within the database. For robot functionality, subVIs were created to store and retrieve data such as calibration data points and regression calculations. / Chemical Informatics
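The subVIs themselves are LabVIEW graphical code, but their database-facing behavior can be sketched in textual form. The following Python/SQLite sketch (table and function names are illustrative, not taken from the thesis) mirrors the create/store/query operations described above:

```python
import sqlite3

def create_calibration_table(conn):
    # Mirrors the "create table and column information" subVI: one row
    # per calibration point for a named robot axis.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS calibration (
               axis TEXT, setpoint REAL, measured REAL)"""
    )

def store_calibration_point(conn, axis, setpoint, measured):
    # Analogue of the subVI that stores calibration data points.
    conn.execute("INSERT INTO calibration VALUES (?, ?, ?)",
                 (axis, setpoint, measured))

def query_axis(conn, axis):
    # Analogue of querying an SQL-specific combination of columns/tables.
    cur = conn.execute(
        "SELECT setpoint, measured FROM calibration "
        "WHERE axis = ? ORDER BY setpoint", (axis,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
create_calibration_table(conn)
store_calibration_point(conn, "shoulder", 0.0, 0.2)
store_calibration_point(conn, "shoulder", 90.0, 90.4)
points = query_axis(conn, "shoulder")
```

In the actual system each of these functions would be a drop-in subVI called from the robotic control VIs.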
|
492 |
Calibration of Hot-Film X-Probes for High Accuracy Angle Alignment in Wind Tunnels. Jackson, Dallin L. 01 August 2019
This thesis investigates the use of hot-film thermal anemometers to align a plate on a wind tunnel at Hill Air Force Base that is used to calibrate Angle of Attack Transmitters on F-16s. A recurring problem with this wind tunnel is that no two instruments can verify an angle reading of the mounting plate for the Angle of Attack Transmitters relative to the air stream in the wind tunnel. Multiple thermal anemometer calibration methods, such as Jorgensen's equation and a look-up table, are implemented in an attempt to achieve consistent measurements between multiple probes. The results show that it is necessary for conditions to match between calibration and measurement when attempting to achieve high-accuracy angle measurements.
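Jorgensen's equation, mentioned above, relates the effective cooling velocity sensed by a hot-film element to the velocity components normal, tangential and binormal to it: U_eff² = U_N² + k²·U_T² + h²·U_B². A minimal sketch follows; the coefficients k and h are typical textbook magnitudes, not values from the thesis:

```python
import math

def jorgensen_effective_velocity(U, alpha_deg, k=0.2, h=1.05):
    # Jorgensen's equation: U_eff^2 = U_N^2 + k^2*U_T^2 + h^2*U_B^2.
    # The flow is assumed to lie in the element's normal/tangential
    # plane (U_B = 0); alpha is the flow angle relative to the normal.
    a = math.radians(alpha_deg)
    U_N, U_T, U_B = U * math.cos(a), U * math.sin(a), 0.0
    return math.sqrt(U_N**2 + (k * U_T)**2 + (h * U_B)**2)
```

Because k < 1, the effective cooling velocity falls below the true speed as the flow angle grows, which is exactly why angle misalignment between calibration and measurement corrupts the result.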
|
493 |
Traceable Imaging Spectrometer Calibration and Transformation of Geometric and Spectral Pixel Properties. Baumgartner, Andreas. 07 February 2022
Over the past several decades, push-broom imaging spectrometers have become a common Earth observation tool. Instruments of this type must be calibrated to convert the raw sensor data into units of spectral radiance. Calibration is in this case a two-step process: first, a sensor model is obtained by performing calibration measurements, which is then used to convert raw signals to spectral radiance data. Further processing steps can be performed to correct for optical image distortions. In this work, we show the complete calibration process for push-broom imaging spectrometers, including uncertainty propagation. Although the focus is specifically on calibrating a HySpex VNIR-1600 airborne imaging spectrometer, all methods can be adapted for other instruments. We discuss the theory of push-broom imaging spectrometers by introducing a generic sensor model, which includes the main parameters and effects of such instruments. We show how to calibrate detector-related effects such as the dark signal, noise as a function of signal, and temperature effects. Correcting temperature effects significantly reduces measurement errors. To determine the signal non-linearity, we built a setup based on the light-addition method and improved this method to allow smaller spacing between the signal levels at which the non-linearity curve is sampled. In addition, we investigate the non-linearity of the integration time. The signal (<=15%) and the integration time (<=0.5%) non-linearities can be corrected with negligible errors. After correcting both non-linearity effects, a smearing effect is revealed, which is investigated in detail. We use a collimator and monochromator setup for calibrating the geometric and spectral parameters, respectively. To accurately model the angular and spectral response functions, we propose using cubic splines, which leads to significant improvements compared to previously used Gaussian functions.
We present a new method that allows the cubic-spline-based response functions to be interpolated for pixels that were not measured. The results show that the spectral and geometric properties are non-uniform and change rapidly within a few pixels. The absolute radiometric calibration is performed with a lamp-plaque setup, and an integrating sphere is used for flat-fielding. To mitigate the influence of sphere non-uniformities, we rotate the instrument along the across-track angle to measure the same spot of the sphere with each pixel. We investigate potential systematic errors and use Monte Carlo simulations to determine the uncertainties of the radiometric calibration. In addition, we measure the polarization sensitivity with a wire-grid polarizer. Finally, we propose a novel image transformation method that allows the geometric and spectral properties of each pixel to be manipulated individually. Image distortions can be corrected by changing a pixel's center angles, center wavelength, and response function shape. This is done by using a transformation matrix that maps each pixel of a target sensor B to the pixels of a source sensor A. This matrix is derived from two cross-correlation matrices: sensor A with itself, and sensor B with sensor A. We provide the mathematical background and discuss the propagation of uncertainty. A case study shows that the method can significantly improve data quality.
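The pixel-wise transformation described above can be sketched numerically. In the minimal pure-Python illustration below, each row of the transformation matrix holds the weights mapping one target-sensor-B pixel onto the source-sensor-A pixels; the matrix values are invented for illustration and are not derived from actual HySpex response functions:

```python
def transform_frame(T, frame_a):
    # Each row of T holds the normalized weights of the source-sensor-A
    # pixels that contribute to one target-sensor-B pixel (in practice
    # derived from cross-correlating the two sensors' response functions).
    return [sum(w * r for w, r in zip(row, frame_a)) for row in T]

# Illustrative 3-pixel target sensor mapped onto a 4-pixel source sensor.
T = [[0.7, 0.3, 0.0, 0.0],
     [0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.2, 0.8]]
frame_a = [1.0, 1.0, 1.0, 1.0]         # spectral radiance seen by sensor A
frame_b = transform_frame(T, frame_a)  # same scene resampled to sensor B
```

Because each row sums to one, a uniform input scene stays uniform after transformation, which is the basic sanity check for such a resampling matrix.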
|
494 |
Propuesta de mejora de los procesos calibración y ventas en una empresa comercializadora de equipos de medición industrial / Improvement proposal for Calibration Process and Sales Process for a Trading Company of Industrial Measurement Tools. Moloche Huamán, Karen Fabiola. 12 February 2020
En la presente propuesta de mejora de procesos en Corporación ZAMTSU S.R.L., que es una empresa dedicada a la comercialización de instrumentos de medición industriales para el sector minero, energético y petrolero. Para la obtención de información, se realizaron entrevistas a los responsables de los procesos de venta y calibración, además se analizó una serie de registros, documentos, archivos, formatos y bases de datos, los cuales permitieron realizar un adecuado diagnóstico para establecer los problemas, sus causas y posibles soluciones. Una vez procesada la información, se llegó a la conclusión que, ZAMTSU no tenía un adecuado tratamiento para sus procesos, descuidando algunos aspectos de alta relevancia. Por otro lado, el 97% de los problemas encontrados se distribuye entre la falta de planificación en aprovisionamiento de instrumentos de medición industrial y retrasos en la calibración no conforme. Es por ello, que cualquier iniciativa o propuesta de mejora debe ir enfocada a los procesos que involucran estos problemas. El impacto económico por los procesos de calibración y ventas representa 72,412 nuevos soles de pérdida acumulada en el periodo 2013-2016, lo cual argumentó la elaboración de propuesta de mejora que se expondrá en los siguientes apartados. / This thesis presents a process-improvement proposal for Corporación ZAMTSU S.R.L., a company dedicated to the trade of industrial measurement tools for the mining, energy and oil sectors. In order to obtain information, interviews were carried out with those responsible for the sales and calibration processes. In addition, a series of records, documents, files, formats and databases were analyzed, which allowed for an adequate diagnosis to establish the problems, their causes and possible solutions. Once the information was processed, it was concluded that ZAMTSU did not have an adequate treatment for its processes, neglecting some highly relevant aspects.
On the other hand, 97% of the problems found are distributed between the lack of planning in the provisioning of industrial measurement instruments and delays in non-conforming calibration. That is why any initiative or improvement proposal should focus on the processes involved in these problems. The economic impact of the calibration and sales processes represents S/. 72,412 of accumulated loss over the period 2013-2016, which justified the elaboration of the improvement proposal discussed in the following sections. / Tesis
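The 97% figure above reflects a Pareto-style concentration of problems in two causes. A minimal sketch of the underlying cumulative-share computation follows; the per-cause loss figures are invented for illustration (only the S/. 72,412 total and the 97% top-two share come from the abstract):

```python
def pareto_shares(losses):
    # losses: {cause: monetary loss}. Returns causes sorted by loss,
    # each paired with its cumulative share of the total loss, as used
    # in a Pareto analysis to pick the vital few causes.
    total = sum(losses.values())
    acc, out = 0.0, []
    for cause, loss in sorted(losses.items(), key=lambda kv: -kv[1]):
        acc += loss
        out.append((cause, round(acc / total, 3)))
    return out

# Illustrative breakdown of the S/. 72,412 accumulated loss (2013-2016).
losses = {"supply planning": 40000,
          "non-conforming calibration": 30250,
          "other": 2162}
shares = pareto_shares(losses)
```

With these illustrative figures the two largest causes account for 97% of the loss, matching the concentration reported in the abstract.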
|
495 |
A Novel Catadioptric Ray-Pixel Camera Model and its Application to 3D Reconstruction / 反射屈折撮像系の新たなカメラモデルと3次元形状復元への応用. Kawahara, Ryo. 25 March 2019
Kyoto University / 0048 / New degree system, doctoral program / Doctor of Informatics / 甲第21910号 / 情博第693号 / 新制||情||119 (University Library) / Kyoto University, Graduate School of Informatics, Department of Intelligence Science and Technology / (Chief examiner) Lecturer Shohei Nobuhara; Professor Ko Nishino; Associate Professor Masaaki Iiyama / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
496 |
Camera Calibration for Zone Positioning and 2D-SLAM : Autonomous Warehouse Solutions for Toyota Material Handling. Bolgakov, Benjamin; Frank, Anton. January 2023
The aim of this thesis is to investigate how well a generic monocular camera, placed on the vehicle, can be employed to localize an autonomous vehicle in a warehouse setting. The main function is to ascertain which zone the vehicle is currently in, as well as update the status when entering a new zone. Two zones are defined, where one has a lower allowed top velocity and the other a higher one. For this purpose, ArUco markers are used to signal to the system where it currently is. Markers are strategically placed around the laboratory area to saturate the environment with possible detections. Multiple sequences are recorded while varying camera placement, angles, and paths to determine the optimal number and placement of markers. In addition to this, a SLAM solution is tested in order to explore what benefits can be found. The idea is to provide fine-grained localization as well as a map of the warehouse environment, to provide more options for further development. To solve the SLAM problem, the implemented particle filter approach initializes a set of particles uniformly distributed within the world frame. For each frame, the particles undergo pose prediction, weight assignment based on likelihood, and resampling. This iterative process gradually converges the particles toward the camera's true position. Visual odometry techniques are used to estimate the camera's ego-motion. The process involves acquiring a sequence of images, detecting distinctive features, matching features between consecutive frames, estimating camera motion, and optionally applying local optimization techniques for further refinement. The implementation shows promise, and all test cases performed during the project have been successful as far as zone localization is concerned. The SLAM solution can detect and track specific features or landmarks over consecutive frames. By triangulating the positions of these features, their depth and distance can be determined.
However, the visualization of these features on a top-down map, which was part of the plan, has not been completed yet despite finishing the particle filter implementation.
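The predict/weight/resample loop described above can be sketched in one dimension. This is a hedged, pure-Python illustration of the particle filter structure, not the thesis's implementation: the motion model, measurement model, and noise levels are all assumed for the example.

```python
import random, math

def pf_step(particles, control, measurement, noise=0.5, meas_sigma=1.0):
    # One particle filter iteration: pose prediction from odometry
    # (control), likelihood weighting against the measurement, then
    # resampling proportional to weight.
    predicted = [p + control + random.gauss(0, noise) for p in particles]
    weights = [math.exp(-0.5 * ((measurement - p) / meas_sigma) ** 2)
               for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(predicted, weights, k=len(predicted))

random.seed(0)
# Particles start uniformly distributed within the (1-D) world frame.
particles = [random.uniform(0.0, 100.0) for _ in range(500)]
true_pos = 20.0
for _ in range(30):
    true_pos += 1.0                      # vehicle moves 1 unit per frame
    particles = pf_step(particles, 1.0, true_pos)
mean = sum(particles) / len(particles)   # converges toward true_pos
```

In the thesis setting the measurement would come from visual odometry or marker detections rather than the direct position observation assumed here.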
|
497 |
Investigation and calibration of pulsed time-of-flight terrestrial laser scanners. Reshetyuk, Yuriy. January 2006
This thesis has two aims. The first is the investigation and analysis of the errors occurring in measurements with pulsed time-of-flight (TOF) terrestrial laser scanners (TLS). A good understanding of the error sources and the relationships between them is necessary to ensure data accuracy. We subdivide these errors into four groups: instrumental, object-related, environmental and methodological. Based on our studies and the results obtained by other researchers, we have compiled an error model for TLS, which is used to estimate the coordinate accuracy of a single point in the point cloud after transformation to the specified coordinate system. The second aim is to investigate systematic instrumental errors and the performance of three pulsed TOF laser scanners – Callidus 1.1, Leica HDS 3000 and Leica HDS 2500 – and to develop calibration procedures that can be applied by the users to determine and correct the systematic errors in these instruments. The investigations have been performed at the indoor 3D calibration field established at KTH and outdoors. The systematic instrumental errors, or calibration parameters, have been estimated in a self-calibration according to the parametric least-squares adjustment in MATLAB®. The initial assumption was that the scanner instrumental errors are similar to those in a total station. The results have shown that the total station error model is applicable to TLS as a first approximation, but additional errors, specific to the scanner design, may appear. For example, we revealed a significant vertical scale error in the scanner Callidus 1.1, caused by faults of the angular position sensor. The coordinate precision and accuracy of the scanners, estimated during the self-calibration, are at the level of several millimetres for Callidus 1.1 and Leica HDS 3000, and at the submillimetre level for Leica HDS 2500.
In other investigations, we revealed a range drift of up to 3 mm during the first few hours of scanning, presumably due to changes in the temperature inside the scanners. The angular precision depends on the scanner design (“panoramic” or “camera-like”), and the angular accuracy depends on the significant calibration parameters in the scanner. Investigations of the influence of surface reflectance on the range measurements have shown that indoor illumination and surface wetness have no tangible influence on the results. The type of material does not, in general, affect the ranging precision for Callidus 1.1, but it affects the ranging precision and accuracy of the scanners Leica HDS 3000 and Leica HDS 2500. The reason may be the different wavelengths and, possibly, the different designs of the electronics in the laser rangefinders. Materials with high reflectance and those painted with bright “warning” colours may introduce significant offsets into the measured ranges (5 – 15 cm) when scanned from close range at normal incidence with the scanner Leica HDS 3000. “Mixed pixels” at object edges may introduce a range error of several centimetres, on average, depending on the type of material. This phenomenon also leads to distortions of the object size, which may be reduced by removing the “mixed pixels” based on their intensity. The laser beam intensity recorded by the scanner tends to decrease with an increased incidence angle, although not as assumed by the popular Lambertian reflectance model. Investigations of the scanner Leica HDS 2500 outdoors have revealed no significant influence of “normal” atmospheric conditions on the range measurements at ranges of up to 50 m. Finally, we have developed and tested two simple procedures for the calibration of the vertical scale (and vertical index) error and zero error in laser scanners.
We have also proposed an approach for the evaluation of the coordinate precision and accuracy in TLS based on the experiences from airborne laser scanning (ALS). / QC 20101123
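The self-calibration described above estimates systematic instrumental errors by a parametric least-squares adjustment. As a minimal illustration (synthetic data, far simpler than the thesis's full adjustment), the zero error and scale error of a rangefinder can be recovered by fitting observations against reference ranges:

```python
def fit_zero_and_scale(true_ranges, observed_ranges):
    # Parametric least-squares estimate of the simple error model
    #   observed = zero_error + scale * true,
    # solved in closed form via the normal equations of a line fit.
    n = len(true_ranges)
    sx = sum(true_ranges)
    sy = sum(observed_ranges)
    sxx = sum(x * x for x in true_ranges)
    sxy = sum(x * y for x, y in zip(true_ranges, observed_ranges))
    scale = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    zero = (sy - scale * sx) / n
    return zero, scale

# Synthetic check: a scanner with a 3 mm zero error and 0.1% scale error.
true_r = [5.0, 10.0, 20.0, 40.0]                 # reference ranges, metres
obs_r = [0.003 + 1.001 * r for r in true_r]      # simulated observations
zero, scale = fit_zero_and_scale(true_r, obs_r)
```

The real adjustment solves for many calibration parameters jointly (and with noisy observations), but the structure is the same: a parametric model linearized over reference observations.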
|
498 |
Model applications on nitrogen and microplastic removal in novel wastewater treatment. Elsayed, Ahmed. January 2021
Excessive release of nitrogen (e.g., ammonia and organic nitrogen) into natural water systems can cause serious environmental problems such as algal blooms and eutrophication in lakes and rivers, threatening aquatic life and the ecosystem balance. Membrane aerated biofilm reactors (MABR) and anaerobic ammonia oxidation (Anammox) are new technologies for wastewater treatment with an emphasis on energy-efficient nitrification and denitrification. Microplastic (MP) is an emerging contaminant in wastewater and sludge treatment that has a negative effect on the environment and public health. For these relatively new technologies and contaminants, mathematical models can enhance our understanding of the removal mechanisms, such as reaction kinetics and mass transport. In this study, mathematical models were developed and utilized to simulate the removal of nitrogen and MP in biological reactions in wastewater treatment processes. Firstly, a comprehensive MABR model was developed and calibrated using pilot-scale MABR operational data to estimate the important process parameters; it was found that biofilm thickness, liquid film thickness and the C/N ratio are key parameters for nitrification and denitrification. Secondly, a mathematical model for the Anammox process was developed and calibrated using previous experimental results to simulate wastewater treatment using the Anammox process, reflecting the importance of dissolved oxygen for nitrogen removal using Anammox bacteria. Thirdly, a granule-based Anammox mathematical model was built and calibrated using simulation results from previous Anammox studies, showing the significance of operational conditions (e.g., granule diameter and dissolved oxygen) for the success of the Anammox enrichment process.
Fourthly, an enzyme kinetic mathematical model was constructed and calibrated with lab-scale experiments to simulate MP reduction using hydrolytic enzymes under various experimental conditions; it was found that anaerobic digesters can be an innovative solution for MP removal during wastewater treatment processes. Based on the main findings of this study, it can be concluded that mathematical models calibrated with various experimental results are efficient tools for determining the important operational parameters for nitrogen and MP removal and for helping in the design and operation of large-scale removal applications. / Thesis / Doctor of Philosophy (PhD) / Nitrogen and microplastic (MP) are serious contaminants in wastewater that can cause critical environmental and public health problems. Nitrogen can cause algal blooms, threatening the aquatic ecosystem, while MP can be ingested by biota (e.g., fish and seabirds), causing serious damage to the food chain. Nitrogen removal in conventional biological wastewater treatment is relatively expensive, requiring high energy costs and a large footprint for the wastewater treatment facilities. MP removal is also difficult in conventional wastewater and sludge treatment processes. Therefore, new technologies, including membrane aerated biofilm reactor (MABR), anaerobic ammonia oxidation (Anammox) and hydrolytic enzyme processes, are implemented to improve nitrogen and MP removal with reduced energy and resource consumption in wastewater and sludge treatment processes. Numerical models are an efficient tool for better understanding these novel technologies and the competing biological reactions within them, coupled with accurate estimation of the reaction rates.
In this thesis, different numerical models were developed and calibrated to estimate the important model parameters, assess the effect of operational conditions on the removal mechanisms and determine the dominant parameters on the removal of nitrogen and MP in the wastewater treatment processes. These numerical models can be used for better understanding of the removal mechanisms of nitrogen and MP, helping in the design and operation of removal systems and addressing novel technologies in large-scale nitrogen and MP removal applications.
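The calibration workflow described above fits model parameters to observations. As a generic, much-simplified stand-in (not the actual MABR or Anammox models), the sketch below calibrates a first-order removal-rate constant from synthetic concentration data:

```python
import math

def calibrate_first_order_k(times, concentrations):
    # Calibrates a first-order removal model C(t) = C0 * exp(-k * t)
    # against observations by linear regression on ln(C) versus t --
    # a minimal stand-in for the model-calibration workflow.
    n = len(times)
    ys = [math.log(c) for c in concentrations]
    st = sum(times)
    sy = sum(ys)
    stt = sum(t * t for t in times)
    sty = sum(t * y for t, y in zip(times, ys))
    slope = (n * sty - st * sy) / (n * stt - st * st)
    return -slope   # removal-rate constant k

# Synthetic ammonia decay generated with k = 0.3 per day.
times = [0.0, 1.0, 2.0, 4.0]
conc = [10.0 * math.exp(-0.3 * t) for t in times]
k = calibrate_first_order_k(times, conc)
```

The thesis models involve coupled biofilm diffusion and multi-species kinetics, but the principle is the same: adjust rate parameters until simulated concentrations reproduce the measured ones.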
|
499 |
On-Chip True Random Number Generation in Nanometer CMOS. Suresh, Vikram Belur. 01 January 2012
On-chip True Random Number Generators (TRNG) form an integral part of a number of cryptographic systems in multi-core processors, communication networks and RFID. A TRNG provides random keys, device IDs and seeds for Pseudo Random Number Generators (PRNG). These circuits, harnessing physical random variations like thermal noise or stray electromagnetic waves, are ideally expected to generate random bits with very high entropy and zero correlation. But progression to advanced semiconductor manufacturing processes has brought about various challenges in the design of TRNG. Increasing variations in the fabrication process and the sensitivity of transistors to operating conditions like temperature and supply voltage have a significant effect on the efficiency of TRNG designed in sub-micron technologies. Poorly designed random number generators also provide an avenue for attackers to break the security of a cryptographic system. Process variation and operating conditions may be used as effective tools of attack against a TRNG. This work makes a comprehensive study of the effect of process variation on metastability-based TRNG designed in deep sub-micron technology. Furthermore, the effect of operating temperature and supply voltage on the performance of TRNG is also analyzed. To mitigate these issues we study entropy extraction mechanisms based both on algorithmic approaches and on circuit tuning, and compare these techniques based on their tolerance to process variation and the energy overhead for correction. We combine the two approaches to efficiently perform self-calibration, using a hybrid of algorithmic correction and circuit tuning to compensate for the effect of variations. The proposed technique provides a fair trade-off between the degree of entropy extraction and the overhead in terms of area and energy, introducing minimal correlation in the output of the TRNG.
Besides the study of the effect of process variation and operating conditions on the TRNG, we also propose to study possible attack models on a TRNG. Finally, we propose a probabilistic approach to the design and analysis of TRNG using a stochastic model of the circuit operation and incorporating the random source in thermal noise. All analysis is done for 45nm technology using the NCSU PDK transistor models. The simulation platform is developed using HSPICE and a Perl-based automation flow.
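The algorithmic entropy-extraction branch studied above can be illustrated with the classic Von Neumann corrector, a textbook example of algorithmic bias correction (the thesis does not necessarily use this exact scheme):

```python
def von_neumann_extract(bits):
    # Classic algorithmic entropy extraction: read raw bits in pairs,
    # emit 0 for the pair (0, 1), 1 for (1, 0), and discard (0, 0) and
    # (1, 1). This removes bias from independent but biased raw TRNG
    # bits at the cost of throughput -- the kind of correction overhead
    # the thesis weighs against circuit tuning.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

A process-variation-skewed metastability cell that emits, say, 70% ones still yields unbiased output after this correction, but roughly four raw bits are consumed per output bit, which is why hybrid schemes that also tune the circuit are attractive.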
|
500 |
Monitoring, Modeling and Implementation of Best Management Practices to Reduce Nutrient Loadings in the Atwood and Tappan Lake Watersheds in Tuscarawas Basin, Ohio. Bijukshe, Shuvra. 19 July 2023
No description available.
|