461 |
Geometrical error calibration in reflective surface testing based on reverse Hartmann test. Wang, Daodang; Gong, Zhidong; Xu, Ping; Liang, Rongguang; Kong, Ming; Zhao, Jun; Wang, Chao; Mo, Linhai; Mo, Shuhui. 23 August 2017
In fringe-illumination deflectometry based on the reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to the various geometrical errors are studied. Using the aberration weights for the various geometrical errors, a computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and sub-nanometer accuracy is achieved.
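A minimal sketch of the computer-aided geometry optimization described above, assuming the ray-trace model can be queried for the aberration terms produced by a trial geometry. Here that model is replaced by a toy linear sensitivity matrix so the example runs; the function and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(9, 4))            # toy sensitivity: geometry errors -> aberration terms

def trace_aberrations(geometry):
    # Stand-in for ray tracing the modeled test system with this trial geometry.
    return S @ geometry

true_geometry_error = np.array([0.02, -0.01, 0.005, 0.03])
measured = trace_aberrations(true_geometry_error)   # "measured" aberrations

geometry = np.zeros(4)                 # start from the nominal geometry
for _ in range(10):
    residual = measured - trace_aberrations(geometry)
    # Finite-difference sensitivities of each aberration term to each geometry error.
    delta = 1e-6
    J = np.column_stack([
        (trace_aberrations(geometry + delta * e) - trace_aberrations(geometry)) / delta
        for e in np.eye(4)
    ])
    step, *_ = np.linalg.lstsq(J, residual, rcond=None)
    geometry += step
    if np.linalg.norm(step) < 1e-12:
        break

print("recovered geometry errors:", np.round(geometry, 4))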
|
462 |
Automotive Engine Calibration with Experiment-Based Evolutionary Multi-objective Optimization. Kaji, Hirotaka. 24 September 2008
The aim of this thesis is to establish an overall framework for a novel approach to control parameter optimization of automotive engines. Today, the control parameters of an automotive engine have to be adjusted adequately and simultaneously to meet multiple criteria such as environmental emissions, fuel consumption and engine torque. This process is called 'engine calibration'. Because many electronic control devices have been adopted in engines to satisfy these objectives, the complexity of engine calibration is increasing year by year. Recent progress in automatic control and instrumentation provides a smart environment, called Hardware In the Loop Simulation (HILS), for engine calibration. In addition, Response Surface Methodology (RSM), based on statistical models, is currently employed as the optimization method. Nevertheless, this approach is complicated by adequate model selection, precise model construction, and close model validation to confirm the precision of the model output. To cope with these problems, we turned to experiment-based optimization in the HILS environment using Multi-objective Evolutionary Algorithms (MOEAs), which are expected to provide a powerful optimization framework for real-world problems such as engineering design, as an alternative automatic calibration approach. In experiment-based optimization, the parameters of a real system are optimized directly, in real time, through experimentation. In this thesis, this approach is called Experiment-Based Evolutionary Multi-objective Optimization (EBEMO) and is proposed as a novel automatic engine calibration technique. This approach releases us from the burdens of model selection, construction, and validation, and calibration can be repeated immediately whenever specifications change. Hence, EBEMO promises to be an effective approach to automatic engine calibration. However, conventional MOEAs face several difficulties that make them hard to apply to real engines. On the one hand, factors that degrade the search performance of MOEAs in real environments have to be considered. For example, observation noise from the sensors in the measured outputs interferes with the convergence of MOEAs. Transient responses caused by parameter switching have similarly harmful effects, and the periodicity of control inputs increases the complexity of the problems. On the other hand, the search time of MOEAs in real environments has to be reduced, because MOEAs require a tremendous number of evaluations. While many measurements can be obtained with HILS, severe limitations on the number of fitness evaluations still exist because the real experiments need real-time evaluation. It is therefore difficult to obtain a set of Pareto optimal solutions in practical time with conventional MOEAs. Additionally, multiple multi-objective optimization problems, defined by the different operating conditions of map-based controllers, have to be solved. In this thesis, to overcome these difficulties and make EBEMO in the HILS environment feasible, five techniques are proposed. Each technique is developed through problem formulation, and its effectiveness is confirmed via numerical and real engine experiments. First, an observation-noise handling technique for MOEAs is considered: because observation noise deteriorates the search ability of MOEAs, a memory-based fitness estimation method that excludes observation noise is introduced. Then, a crossover operator for periodic functions is proposed.
Periodicity exists in many engineering problems and harms the performance of evolutionary algorithms. Moreover, the influence of transient responses caused by parameter switching in dynamical systems is considered; to address this, a travelling salesman problem solver is used to determine the evaluation order of individuals. In addition, pre-selection is proposed as an acceleration method for MOEAs: the generated offspring are pre-evaluated with an approximation model built from the search history, and only the promising offspring are evaluated in the real environment. Finally, parameterization of multi-objective optimization problems is considered. In engine calibration for maps, optimal control parameters have to be obtained at each operating condition, such as engine speed and torque. This problem can be formulated as solving a whole family of multi-objective optimization problems defined by the conditional variables. To solve it effectively, an interpolative initialization method is proposed. Through the real engine experiments, it was confirmed that EBEMO can achieve practical search accuracy and time using the proposed techniques. In conclusion, the contribution of EBEMO to engine calibration is discussed and directions for future work are outlined. / Kyoto University (京都大学) / 0048 / New system, doctoral program / Doctor of Informatics / 甲第14187号 / 情博第320号 / 新制||情||61(附属図書館) / 26493 / UT51-2008-N504 / Kyoto University, Graduate School of Informatics, Department of Social Informatics / (Chief examiner) Professor Hajime Kita; Professor Tetsuro Sakai; Professor Osamu Katai / Qualified under Article 4, Paragraph 1 of the Degree Regulations
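As a rough illustration of the pre-selection idea above, the following sketch screens candidate offspring on a cheap surrogate built from the search history and spends the single expensive (real-engine / HILS) evaluation only on the most promising one. The two-objective test function, the k-nearest-neighbour surrogate and all parameter values are illustrative assumptions, not the methods used in the thesis.

import numpy as np

rng = np.random.default_rng(0)

def expensive_eval(x):
    # Stand-in for a real-engine measurement returning two noisy objectives.
    f1 = np.sum(x**2) + rng.normal(0, 0.01)
    f2 = np.sum((x - 1.0)**2) + rng.normal(0, 0.01)
    return np.array([f1, f2])

def surrogate(x, history_x, history_f, k=3):
    # k-nearest-neighbour estimate of the objectives from the search history.
    d = np.linalg.norm(history_x - x, axis=1)
    return history_f[np.argsort(d)[:k]].mean(axis=0)

# Search history: previously evaluated points and their measured objectives.
history_x = rng.uniform(-2, 2, size=(20, 3))
history_f = np.array([expensive_eval(x) for x in history_x])

parent = history_x[np.argmin(history_f.sum(axis=1))]
candidates = parent + rng.normal(0, 0.3, size=(10, 3))   # 10 offspring per parent

# Pre-select: rank candidates by surrogate prediction (sum of objectives as a
# crude scalarisation), then spend the single real evaluation on the best one.
scores = [surrogate(c, history_x, history_f).sum() for c in candidates]
best = candidates[int(np.argmin(scores))]
print("real evaluation of pre-selected offspring:", expensive_eval(best))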
|
463 |
Application of de-embedding methods to microwave circuits. Swiatko, Adam. January 2013
In many instances the properties of a network of interest are obscured by an intervening network that is required in order to measure it. These intervening networks often take the form of a mode transformer and are, in the general sense, referred to as error networks.
A new analysis mechanism is developed by applying a de-embedding method that was
identified as being robust. The analysis was subsequently implemented in a numerical
computational software package. The analysis mechanism can then be applied to perform
the characterisation of error networks. The performance of the analysis mechanism is
verified using an ideal lumped-element network. The limitations of the mechanism are
identified and possible ways of addressing these limitations are given. The mechanism is
successfully applied to the characterisation of three different microwave networks. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
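One common way to strip such error networks from two-port measurements, given their S-parameters, is to cascade transfer (T) matrices: T_measured = T_A · T_DUT · T_B, so T_DUT = inv(T_A) · T_measured · inv(T_B). The sketch below illustrates this with ideal toy two-ports; it is not the analysis mechanism developed in the dissertation, and the networks are invented for the example.

import numpy as np

def s_to_t(S):
    # Convention: [a1, b1]^T = T [b2, a2]^T for incident waves a and reflected waves b.
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[1.0 / S21, -S22 / S21],
                     [S11 / S21, S12 - S11 * S22 / S21]])

def t_to_s(T):
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return np.array([[T21 / T11, T22 - T21 * T12 / T11],
                     [1.0 / T11, -T12 / T11]])

def cascade(*s_matrices):
    # Cascade two-ports by multiplying their T-matrices in connection order.
    T = np.eye(2, dtype=complex)
    for S in s_matrices:
        T = T @ s_to_t(S)
    return t_to_s(T)

# Toy error networks (matched, lossy lines) and a toy DUT (mismatched attenuator).
S_A   = np.array([[0.0, 0.9 * np.exp(-1j * 0.7)], [0.9 * np.exp(-1j * 0.7), 0.0]])
S_B   = np.array([[0.0, 0.8 * np.exp(-1j * 1.2)], [0.8 * np.exp(-1j * 1.2), 0.0]])
S_dut = np.array([[0.1, 0.5], [0.5, 0.2]])

S_meas = cascade(S_A, S_dut, S_B)      # what the instrument would "see"

# De-embed: strip the error networks off the measurement and recover the DUT.
T_dut = np.linalg.inv(s_to_t(S_A)) @ s_to_t(S_meas) @ np.linalg.inv(s_to_t(S_B))
print("recovered DUT S-parameters:\n", np.round(t_to_s(T_dut), 6))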
|
464 |
Internal balance calibration and uncertainty estimation using Monte Carlo simulation. Bidgood, Peter Mark. 18 March 2014
D.Ing. (Mechanical Engineering) / The most common data sought during a wind tunnel test program are the forces and moments acting on an airframe (or any other test article). The most common source of this data is the internal strain gauge balance. Balances are six-degree-of-freedom force transducers that are required to be of small size and of high strength and stiffness. They are required to deliver the highest possible levels of accuracy and reliability. There is a focus in both the USA and in Europe on improving the performance of balances through collaborative research. This effort is aimed at materials, design, sensors, electronics, calibration systems and calibration analysis methods. Recent developments in the use of statistical methods, including modern design of experiments, have resulted in improved balance calibration models. Research focus on the calibration of six-component balances has moved to the determination of the uncertainty of measurements obtained in the wind tunnel. The application of conventional statistically-based approaches to determining the uncertainty of a balance measurement is proving problematic, and to some extent an impasse has been reached. The impasse is caused by the rapid expansion of the problem size when standard uncertainty determination approaches are used in a six-degree-of-freedom system that includes multiple least-squares regressions and iterative matrix solutions. This thesis describes how the uncertainty of loads reported by a six-component balance can be obtained by directly simulating the end-to-end data flow of a balance, from calibration through to installation, using a Monte Carlo simulation. It is postulated that knowledge of the error propagated into the test environment through the balance will influence the choice of calibration model, and that an improved model, compared to that determined by statistical methods without this knowledge, will be obtained. Statistical approaches to the determination of a balance calibration model are driven by obtaining the best curve-fit statistics possible. This is done by adding as many coefficients to the modelling polynomial as can be statistically defended. This thesis shows that the propagated error will significantly influence the choice of polynomial coefficients. In order to do this, a Performance Weighted Efficiency (PWE) parameter is defined. The PWE is a combination of the curve-fit statistic (the back-calculated error for the chosen polynomial), a value representing the overall prediction interval for the model (CI_rand), and a value representing the overall total propagated uncertainty of loads reported by the installed balance...
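A minimal sketch of the end-to-end Monte Carlo idea: the calibration fit and the installed use of the balance are re-simulated many times with fresh noise, and the spread of the reported loads gives the propagated uncertainty. A two-component linear "balance" with invented numbers is used purely for illustration; the thesis deals with six components and higher-order calibration models.

import numpy as np

rng = np.random.default_rng(1)
true_sensitivity = np.array([[1.00, 0.05],
                             [0.03, 0.80]])       # loads -> bridge outputs
applied_loads = rng.uniform(-1, 1, size=(50, 2))  # calibration load schedule
test_load = np.array([0.6, -0.3])                 # load applied in the "tunnel"
noise = 1e-3                                      # output noise, full-scale units

reported = []
for _ in range(5000):
    # Calibration: measure outputs for the known applied loads, fit by least squares.
    outputs = applied_loads @ true_sensitivity.T + rng.normal(0, noise, (50, 2))
    C, *_ = np.linalg.lstsq(outputs, applied_loads, rcond=None)  # outputs -> loads
    # Installation: one noisy reading of the test load, converted with the fitted matrix.
    reading = true_sensitivity @ test_load + rng.normal(0, noise, 2)
    reported.append(reading @ C)

reported = np.array(reported)
print("mean reported load:", reported.mean(axis=0))
print("propagated standard deviation:", reported.std(axis=0))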
|
465 |
Measuring and modelling the energy demand reduction potential of using zonal space heating control in a UK home. Beizaee, Arash. January 2016
Most existing houses in the UK have a single thermostat, a timer and conventional thermostatic radiator valves to control the low-pressure, hot-water space heating system. A number of companies are now offering a solution for room-by-room temperature and time control in such older houses. These systems comprise motorised radiator valves with inbuilt thermostats and time control. There is currently no evidence from any rigorous scientific study to support the energy saving claims of these zonal control systems. This thesis quantifies the potential savings of zonal control for a typical UK home. There were three components to the research. Firstly, full-scale experiments were undertaken in a matched pair of instrumented, three-bedroom, un-furbished, 1930s test houses that included equipment to replicate the impacts of an occupant family. Secondly, a dynamic thermal model of the same houses, with the same occupancy pattern, was developed and calibrated against the measured results. Thirdly, the experimental and model results were assessed to explore how the energy savings might vary in different UK climates or in houses with different levels of insulation. The results of the experiments indicated that over an 8-week winter period, the house with zonal control used 12% less gas for space heating than a conventionally controlled system, despite the zonal control system resulting in a 2 percentage point lower boiler efficiency. The calibrated dynamic thermal model was able to predict the energy use, indoor air temperatures and energy savings to a reasonable level of accuracy. Wider-scale evaluation showed that the annual gas savings for similar houses in different regions of the UK would be between 10 and 14%, but the energy savings in better insulated homes would be lower.
|
466 |
Calibration of a NaI(Tl) detector for low level counting of naturally occurring radionuclides in soil. Noncolela, Sive Professor. January 2011
Magister Scientiae - MSc / The Physics Department at the University of the Western Cape and the Environmental Physics group at iThemba LABS have been conducting radiometric studies on both land and water. In this study a 7.5 cm × 7.5 cm NaI(Tl) detector was used to study the activity concentrations of primordial radionuclides in soil and sand samples. The detector and the sample were placed inside a lead castle to reduce the background in the laboratory from the surroundings, such as the walls and the floor. The samples were placed inside a 1 L Marinelli beaker, which surrounds the detector for better relative efficiency as almost the whole sample is exposed to the detector. Additional lead bricks were placed below the detector to further reduce the background by 20%. The NaI detector is known to be prone to spectral drift caused by temperature differences inside and around the detector. The spectral drift was investigated by using a ¹³⁷Cs source to monitor movements of the 662 keV peak. The maximum centroid shift was about 4 keV (over a period of 24 hours), which is enough to cause disturbances in spectral fitting. There was no correlation between the centroid shift and the small room temperature fluctuations of 1.56 ºC. A Full Spectrum Analysis (FSA) method was used to extract the activity concentrations of ²³⁸U, ²³²Th and ⁴⁰K from the measured data. The FSA method differs from the usual Windows Analysis (WA) in that it uses the whole spectrum instead of only putting a 'window' around the region of interest to measure the counts around a certain energy peak. The FSA method uses standard spectra corresponding to the radionuclides being investigated, and is expected to have an advantage when low-activity samples are measured. The standard spectra are multiplied by the activity concentrations and then added to fit the measured spectrum; accurate concentrations are then extracted using a chi-squared (χ²) minimization procedure. Eight samples were measured in the laboratory using the NaI detector and analysed using the FSA method. The samples were measured for about 24 hours each for good statistics. Microsoft Excel and MATLAB were used to calculate the activity concentrations. The ²³⁸U activity concentration values varied from 14 ± 1 Bq/kg (iThemba soil, HS6) to 256 ± 10 Bq/kg (Kloof sample). The ²³²Th activity concentration values varied from 7 ± 1 Bq/kg (Anstip beach sand) to 53 ± 3 Bq/kg (Rawsonville soil #B31). The ⁴⁰K activity concentration values varied from 60 ± 20 Bq/kg (iThemba soil, HS6) to 190 ± 20 Bq/kg (Kloof sample). The χ² values also varied from sample to sample, with the lowest being 12 (Anstip beach sand) and the highest (for samples without contamination by anthropogenic nuclides) being 357 (Rawsonville soil #B28). A high χ² value usually indicates incomplete gain drift corrections, an improper set of fitting functions, improper treatment of coincidence summing, or the presence of anthropogenic (man-made) radionuclides in the source [Hen03]. Activity concentrations of ⁴⁰K, ²³²Th and ²³⁸U were also measured at four stationary points on the Kloof mine dump; a fifth stationary point was located on the Southdeep mine dump. These measurements were analysed using the FSA method, fitting the standard spectra to the measured spectra by eye in Microsoft Excel, and the resulting values were then compared to values obtained using an automated minimization procedure in MATLAB.
There was good agreement between these results, except for ²³²Th, which had higher concentrations when MATLAB was used: the average value was 16 Bq/kg in Excel and 24 Bq/kg in MATLAB.
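A minimal sketch of the FSA fit described above: the measured spectrum is modelled as a sum of standard spectra (one per radionuclide, in counts per Bq/kg) plus a flat background, scaled by the activity concentrations, which are found by χ² minimization. All spectra and numbers below are synthetic toys, not data from the study.

import numpy as np

rng = np.random.default_rng(2)
channels = np.arange(1024)

def peak(centre, width):
    return np.exp(-0.5 * ((channels - centre) / width) ** 2)

# Toy standard spectra (counts per channel per Bq/kg) plus a flat background term.
standards = np.column_stack([
    0.02 * peak(250, 30) + 0.01 * peak(609, 8),   # stands in for the 238U series
    0.02 * peak(300, 35) + 0.01 * peak(911, 9),   # stands in for the 232Th series
    0.03 * peak(460, 12),                         # stands in for 40K
    np.ones_like(channels, dtype=float),          # laboratory background
])

true_values = np.array([50.0, 20.0, 150.0, 0.5])  # Bq/kg (and background level)
measured = rng.poisson(standards @ true_values).astype(float)

# Chi-squared minimization: weighted linear least squares with sigma^2 ~ counts.
sigma = np.sqrt(np.maximum(measured, 1.0))
coeffs, *_ = np.linalg.lstsq(standards / sigma[:, None], measured / sigma, rcond=None)

residual = (standards @ coeffs - measured) / sigma
chi2_dof = np.sum(residual ** 2) / (len(channels) - standards.shape[1])
print("fitted activity concentrations (Bq/kg):", np.round(coeffs[:3], 1))
print("chi-squared per degree of freedom:", round(float(chi2_dof), 2))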
|
467 |
The Development of Calibrants through Characterization of Volatile Organic Compounds from Peroxide Based Explosives and a Non-target Chemical Calibration Compound. Beltz, Katylynn. 13 February 2013
Detection canines represent the fastest and most versatile means of illicit material detection. This research endeavor, in its simplest form, is the improvement of detection canines through training, training aids, and calibration. This study focuses on developing a universal calibration compound with which all detection canines, regardless of detection substance, can be tested daily to ensure that they are working within acceptable parameters. Surrogate continuation aids (SCAs) were developed for peroxide based explosives, along with the validation of the SCAs already developed within the International Forensic Research Institute (IFRI) prototype surrogate explosives kit. Storage parameters of the SCAs were evaluated to give recommendations to the detection canine community on the best possible training aid storage solution, one that minimizes the likelihood of contamination. Two commonly used and accepted detection canine imprinting methods were also evaluated for the speed with which the canine is trained and for their reliability.
As a result of this study, SCAs have been developed for explosive detection canine use covering peroxide based explosives, TNT based explosives, nitroglycerin based explosives, tagged explosives, plasticized explosives, and smokeless powders. Through the use of these surrogate continuation aids, a more uniform and reliable system of training can be implemented in the field than is currently used today. By examining the storage parameters of the SCAs, an ideal storage system has been developed using three levels of containment to reduce possible contamination. The developed calibration compound will ease the growing concerns over the legality and reliability of detection canine use by detailing the daily working parameters of the canine, allowing the Daubert rules of evidence admissibility to be applied. Through canine field testing, it has been shown that the IFRI SCAs outperform other commercially available training aids on the market. Additionally, of the imprinting methods tested, no difference was found in the speed with which the canines are trained or in their reliability to detect illicit materials. Therefore, if the recommendations of this study are followed, the detection canine community will benefit greatly from the use of scientifically validated training techniques and training aids.
|
468 |
Characterization and mitigation of radiation damage on the Gaia Astrometric Field. Brown, Scott William. January 2011
In November 2012, the European Space Agency (ESA) is planning to launch Gaia, a mission designed to measure with microarcsecond accuracy the astrometric properties of over a billion stars. Microarcsecond astrometry requires extremely accurate positional measurements of individual stellar transits on the focal plane, which can be disrupted by radiation-induced Charge Transfer Inefficiency (CTI). Gaia will suffer radiation damage, impacting on the science performance, which has led to a series of Radiation Campaigns (RCs) being carried out by industry to investigate these issues. The goal of this thesis is to rigorously assess these campaigns and to establish how to deal with CTI in the data processing. We begin in Chapter 1 by giving an overview of astrometry and photometry, introducing the concept of stellar parallax, and establishing why observing from space is paramount for performing global, absolute astrometry. As demonstrated by Hipparcos, the concept is sound. After reviewing the Gaia payload and discussing how astrometric and photometric parameters are determined in practice, we introduce the issue of radiation-induced CTI and how it may be dealt with. The on-board mitigating strategies are investigated in detail in Chapter 2. Here we analyse the effects of radiation damage as a function of magnitude, with and without a diffuse optical background, charge injection and the use of gates, and also discover a number of calibration issues. Some of these issues are expected to be removed during flight testing; others will have to be dealt with as part of the data processing, e.g. CCD stitches and the charge injection tail. In Chapter 3 we turn to the physical properties of a Gaia CCD. Using data from RC2 we probe the density of traps (i.e. damaged sites) in each pixel and, for the first time, measure the Full Well Capacity of the Supplementary Buried Channel, a part of every Gaia pixel that constrains the passage of faint signals away from the bulk of traps throughout the rest of the pixel. The Data Processing and Analysis Consortium (DPAC) is currently adopting a 'forward modelling' approach to calibrate radiation damage in the data processing. This incorporates a Charge Distortion Model (CDM), which is investigated in Chapter 4. We find that although the CDM performs well, there are a number of degeneracies in the model parameters, which may be probed further with better experimental data and a more realistic model. Another way of assessing the performance of a CDM is explored in Chapter 5. Using a Monte Carlo approach we test how well the CDM can extract accurate image parameters. It is found that the CDM must be highly robust to achieve a moderate degree of accuracy and that the fitting is limited by assigning finite window sizes to the image shapes. Finally, in Chapter 6 we summarise our findings on the campaign analyses, the on-board mitigating strategies and how well we are currently able to handle radiation damage in the data processing.
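A minimal sketch of the kind of Monte Carlo test mentioned for Chapter 5: a Gaussian stellar profile is distorted with a toy charge-trailing model (a crude stand-in for a real Charge Distortion Model), noise is added, and the centroid recovered by a Gaussian fit is compared with the truth to estimate the CTI-induced bias. The trailing model and all numbers are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
pixels = np.arange(12)

def gaussian(x, amp, mu, sigma, bg):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bg

def toy_trailing(image, loss=0.08, release=0.4):
    # Crude CTI stand-in: a fraction of each pixel's charge is captured and
    # released exponentially into the trailing pixels.
    out = image.copy()
    for i in range(len(image)):
        captured = loss * image[i]
        out[i] -= captured
        for j in range(i + 1, len(image)):
            out[j] += captured * (1 - release) ** (j - i - 1) * release
    return out

true_mu, biases = 5.3, []
for _ in range(1000):
    clean = gaussian(pixels, amp=500.0, mu=true_mu, sigma=1.1, bg=20.0)
    damaged = toy_trailing(clean)
    noisy = rng.poisson(damaged).astype(float)
    p0 = (noisy.max(), float(pixels[np.argmax(noisy)]), 1.0, noisy.min())
    popt, _ = curve_fit(gaussian, pixels, noisy, p0=p0, maxfev=5000)
    biases.append(popt[1] - true_mu)

print("mean centroid bias (pixels):", round(float(np.mean(biases)), 4))
print("centroid scatter (pixels):  ", round(float(np.std(biases)), 4))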
|
469 |
Optimal pose selection for the identification of geometric and elastostatic parameters of machining robots. Wu, Yier. 15 January 2014
The thesis deals with optimal pose selection for the geometric and elastostatic calibration of industrial robots employed in the machining of large parts. Particular attention is paid to the improvement of robot positioning accuracy after compensation of the geometric and elastostatic errors. To meet the industrial requirements of machining operations, a new approach to the design of calibration experiments for serial and quasi-serial industrial robots is proposed. This approach is based on a new industry-oriented performance measure that evaluates the quality of a calibration experiment plan via the manipulator positioning accuracy after error compensation, and takes into account the particularities of the prescribed manufacturing task by introducing manipulator test poses. Contrary to previous works, the developed approach employs an enhanced partial pose measurement method, which uses only direct position measurements from an external device and avoids the non-homogeneity of the relevant identification equations. In order to consider the impact of the gravity compensator, which creates closed-loop chains, the conventional stiffness model is extended by including some configuration-dependent elastostatic parameters, which are assumed to be constant for strictly serial robots. A corresponding methodology for the calibration of the gravity compensator models is also proposed. The advantages of the developed calibration techniques are validated via an experimental study dealing with the geometric and elastostatic calibration of a KUKA KR-270 industrial robot.
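A minimal sketch of the test-pose-oriented idea behind such experiment design: from a pool of candidate calibration poses, greedily keep those that minimize the expected positioning error at a prescribed test pose after least-squares identification. The random Jacobians, the greedy selection and the noise level are illustrative assumptions, not the algorithm developed in the thesis.

import numpy as np

rng = np.random.default_rng(4)
n_params, n_candidates, n_select = 6, 40, 8
sigma = 0.05  # assumed measurement noise (mm) on each position coordinate

# d(position)/d(parameters) at each candidate calibration pose and at the test pose.
J_cand = rng.normal(size=(n_candidates, 3, n_params))
J_test = rng.normal(size=(3, n_params))

def expected_test_error(selected):
    # Covariance of the identified parameters from ordinary least squares,
    # propagated to the test pose: a simple "accuracy after compensation" measure.
    info = sum(J_cand[i].T @ J_cand[i] for i in selected)
    cov = sigma ** 2 * np.linalg.inv(info)
    return float(np.sqrt(np.trace(J_test @ cov @ J_test.T)))

selected = []
while len(selected) < n_select:
    remaining = [i for i in range(n_candidates) if i not in selected]
    scores = {}
    for i in remaining:
        trial = selected + [i]
        info = sum(J_cand[k].T @ J_cand[k] for k in trial)
        # Only score candidates that make the parameters identifiable.
        if np.linalg.matrix_rank(info) == n_params:
            scores[i] = expected_test_error(trial)
    if not scores:                      # not yet identifiable: add any pose
        selected.append(remaining[0])
        continue
    selected.append(min(scores, key=scores.get))

print("selected pose indices:", selected)
print("expected test-pose error (mm):", round(expected_test_error(selected), 4))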
|
470 |
Development and evaluation of a method for measuring breast density. Diffey, Jennifer. January 2012
Introduction: Breast density is an important independent risk factor for breast cancer and is negatively associated with the diagnostic sensitivity of mammography. Measurement of breast density can be used to identify women at increased risk of developing breast cancer and those who would benefit from additional imaging. However, measurement techniques are generally subjective and do not reflect the true three-dimensional nature of the breast and its component tissues. Method: A semi-automated method for determining the volume of glandular tissue from digitised mammograms has been developed in Manchester. It requires a calibration device (stepwedge) to be imaged alongside the breast during mammography, with magnification markers on the compression paddle to accurately determine breast thickness. Improvements to the design of the stepwedge and markers have enabled the method to be applied to the screening population for the first time. 1,289 women had their volumetric breast density measured using this method and additionally completed a questionnaire on breast cancer risk factors. Results: The method has demonstrated excellent intra- and inter-observer agreement. The median percentage breast density in the study cohort was 8.4% (interquartile range 4.9 – 14.2%). There was no significant difference between left and right breasts; the difference between MLO and CC views was significant (the CC view was denser), but values were closely correlated (r = 0.92, p < 0.001). The median glandular volume was 60.1 cm³ and exhibited no significant variation between left/right breasts or CC/MLO views. A number of breast cancer risk factors were found to be significantly correlated with glandular volume and percentage breast density, including age, weight, BMI, parity, current HRT use and current smoking. The strength of correlation was equal to or greater than that of visually assessed mammographic density. Glandular volume and percentage breast density measurements demonstrated strong relationships with visually assessed mammographic density, which has been shown to be highly correlated with risk. Conclusions: These findings are promising and suggest that volumetric breast density measured using this method should be associated with breast cancer risk. However, further work is required to establish this relationship directly. The method will be used in a large study, known as PROCAS (Predicting Risk Of Cancer At Screening), which aims to develop individualised breast cancer risk prediction models; these have the potential to form the basis of tailored screening intervals. Preliminary work has been undertaken to adapt the method for full field digital mammography, which suggests that it is possible to use the integrated digital detector as the calibration device.
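A minimal sketch of the volumetric calculation implied above: a stepwedge calibration curve maps each breast pixel value to an equivalent glandular fraction of the compressed thickness, which is summed over the breast area to give a glandular volume and a percentage density. The image, the calibration curve, the pixel pitch and the breast thickness are synthetic toys; a real implementation would also handle breast-edge thickness, scatter and magnification corrections.

import numpy as np

rng = np.random.default_rng(5)

pixel_pitch_cm = 0.05          # assumed digitised pixel pitch
breast_thickness_cm = 5.5      # assumed thickness from the magnification markers

# Toy stepwedge calibration: mean pixel value under each step versus the known
# equivalent glandular fraction of the compressed breast thickness.
step_fraction = np.linspace(0.0, 1.0, 11)
step_pixel_value = 3000.0 - 1800.0 * step_fraction

# Synthetic "mammogram": pixel values inside a rectangular breast mask.
image = np.full((400, 300), 3100.0)
breast_mask = np.zeros_like(image, dtype=bool)
breast_mask[50:350, 30:270] = True
image[breast_mask] = 3000.0 - 1800.0 * rng.uniform(0.0, 0.4, breast_mask.sum())

# Invert the calibration curve: pixel value -> glandular fraction.
order = np.argsort(step_pixel_value)          # np.interp needs increasing x
fraction = np.interp(image[breast_mask], step_pixel_value[order], step_fraction[order])

pixel_area_cm2 = pixel_pitch_cm ** 2
glandular_volume = np.sum(fraction * breast_thickness_cm * pixel_area_cm2)
breast_volume = breast_mask.sum() * breast_thickness_cm * pixel_area_cm2
print("glandular volume (cm^3):", round(float(glandular_volume), 1))
print("percentage breast density (%):", round(float(100 * glandular_volume / breast_volume), 1))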
|