341
Reflectivity Measurement System Development and Calibration
Peng, Tao, January 2007
Accurate assessment of the road luminance provided by overhead streetlights helps optimize the visibility of objects on the road, and therefore promotes driver safety, while minimizing energy consumption. To calculate road luminance, the reflectivity of the road surface must be known. Odyssey Energy Limited (OEL) has developed a prototype system with the potential to determine road reflectivity properties at high speed. In this thesis, the prototype system was investigated and then further enhanced and redesigned. A portable on-site road surface reflectivity measurement system that complies with the Commission Internationale de l'Éclairage (CIE) standard was developed. The new system was road-tested on a series of Hamilton city roads. These tests showed that the new system is capable of measuring road surface reflectivity and classifying a road into its appropriate R class according to the CIE standards specified in street lighting design criteria. Finally, the OEL prototype system was calibrated against the new system to determine the correlation between the two systems.
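The CIE framework the abstract refers to evaluates luminance at a point on the road from a reduced luminance coefficient r (the quantity the measurement system determines), the luminaire's luminous intensity I toward that point, and the mounting height H. A rough, hedged sketch of that relationship; the coefficient value, intensity, and height below are made up for illustration, not taken from the thesis:

```python
# Hedged sketch (not the thesis code): the CIE point-luminance relation
# L = r * I / H^2, where r is the measured reduced luminance coefficient,
# I the luminous intensity toward the point (cd), and H the mounting
# height (m). All numeric values here are illustrative.

def point_luminance(r: float, intensity_cd: float, mount_height_m: float) -> float:
    """Road luminance at a point, in cd/m^2: L = r * I / H^2."""
    return r * intensity_cd / mount_height_m ** 2

# Illustrative numbers: r = 0.05, I = 2000 cd, H = 10 m
L = point_luminance(r=0.05, intensity_cd=2000.0, mount_height_m=10.0)
print(round(L, 3))  # 0.05 * 2000 / 100 = 1.0 cd/m^2
```

In practice the classification into an R class is based on tables of such coefficients measured over many observation geometries, not a single point.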
342
Calibration of numerical models with application to groundwater flow in the Willunga Basin, South Australia
Rasser, Paul Edward, January 2001
The process of calibrating a numerical model is examined in this thesis, with an application to groundwater flow in the Willunga Basin in South Australia. The calibration process involves estimating unknown parameters of the numerical model so that the output obtained from the model is comparable with data observed in the field. Three methods for calibrating numerical models are discussed: the steepest descent method, the nonlinear least squares method, and a new method called the response function method. The response function method uses the functional relationship between the model's output and the unknown parameters to determine improved estimates for the unknown parameters. The functional relationships are based on analytic solutions to simplified model problems or on previous experience. The three calibration methods are compared using a simple function involving one parameter, an idealised steady-state model of groundwater flow, and an idealised transient model of groundwater flow. The comparison shows that the response function method produces accurate estimates in the fewest iterations. A numerical model of groundwater flow in the Willunga Basin has been developed, and the response function method used to estimate the unknown parameters of this model. The model of the Willunga Basin has been used to examine the sustainable yield of groundwater from the basin, and the effect on groundwater levels of current extraction rates and of rates for sustainable yield estimated from the literature. The response function method has also been used to estimate the extraction rate that would return groundwater levels at a specific location to a desirable level. / Thesis (M.Sc.)--Department of Applied Mathematics, 2001.
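As a hedged toy illustration of the calibration idea (not code from the thesis), the nonlinear least squares method can be sketched in one dimension: fit a decay parameter k of a model y(t) = exp(-k t) to synthetic "observed" data by Gauss-Newton iteration, the standard way normal equations drive the parameter toward the data:

```python
# Hedged one-parameter sketch (not the thesis code): calibrate k in
# y(t) = exp(-k*t) against synthetic observations by Gauss-Newton
# nonlinear least squares. Model, data, and starting guess are illustrative.
import math

t_obs = [0.5, 1.0, 1.5, 2.0]
k_true = 0.8
y_obs = [math.exp(-k_true * t) for t in t_obs]  # synthetic "field data"

def gauss_newton(k0: float, iters: int = 12) -> float:
    """Iteratively refine k by minimizing the sum of squared residuals."""
    k = k0
    for _ in range(iters):
        r = [y - math.exp(-k * t) for y, t in zip(y_obs, t_obs)]  # residuals
        J = [t * math.exp(-k * t) for t in t_obs]                 # dr/dk
        # Normal-equation update: delta = -(J^T J)^(-1) J^T r
        k -= sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)
    return k

print(gauss_newton(0.1))  # converges to k_true = 0.8
```

The response function method described in the thesis replaces the Jacobian-based update with an update derived from an analytic model-output/parameter relationship, which is what reduces the iteration count.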
343
Discrete element modeling of rock block impacts on merlon-type protection structures
Plassiard, Jean-Patrick, 07 December 2007
Merlon-type structures, built as protection against rock block falls, are in practice dimensioned using empirical methods. To optimize the geometry of these constructions, their numerical modeling was undertaken. The discrete element method is used because of the strong rearrangements of the constitutive embankment that can occur during an impact. Based on a state of the art, the mechanical characteristics of the embankment are evaluated. The quasi-static parameters of the model material are then calibrated by simulating triaxial tests. The parameters governing the dynamic behavior are calibrated in turn by simulating moderate-energy impacts. The resulting model embankment is validated by modeling experimental impact tests whose energy range corresponds to that encountered for merlons. It is then used to simulate impacts on merlons. Several structure sizes are studied, each covering a different energy range. A parametric analysis of various aspects of the block and of the structure is carried out, which identifies the main variables governing the impact phenomenon. These variables are then varied jointly to establish their influence on the merlon's capacity to stop the block.
344
WCDMA User Equipment Output Power Calibration / Uteffektskalibrering för WCDMA-telefon
Folkeson, Tea, January 2003
To save time in Flextronics' high-volume production, the test and calibration of mobile telephones must be as fast and as accurate as possible. In the wideband code division multiple access (WCDMA) case, the output power calibration is the most critical calibration with regard to accuracy. The aim of this thesis was to find a faster calibration method than the existing one while retaining accuracy.

The Third Generation Partnership Project (3GPP) outlines the requirements on the output power, and these must be thoroughly considered when choosing a calibration method. Measurement accuracy and the behavior of the transmitter chain parameters must also be considered.

The output power in the WCDMA phone studied is controlled by seven parameters. These parameters are characterized in this thesis and are found to be too hardware dependent to be predicted, or to be inferred from each other.

Since no parameter predictions are possible, all parameters have to be measured, and a faster way of measuring them is proposed. The principle of the new measurement method is presented, and the implemented software is tested and evaluated. The new method mainly makes use of the spectrum analyzer's zero-span function.

The evaluation shows that the new method is faster than the original one and retains accuracy. The measurement uncertainties even seem to diminish, which suggests decreased temperature dependence due to the shorter measurement time.
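A hedged sketch of the zero-span idea (the windowing scheme and all numbers are illustrative, not Flextronics' implementation): in zero span the analyzer records power versus time at a fixed frequency, so stepping the power-control setting during a single sweep captures every level in one trace, which can then be split into windows and averaged in linear power:

```python
# Hedged sketch (illustrative only): post-processing a zero-span trace in
# which the output power was stepped through several levels during one sweep.
# Averaging is done in mW because dBm is logarithmic.
import math

def average_step_powers(trace_dbm, n_steps):
    """Split a zero-span trace into n_steps equal windows and return the
    average power of each window in dBm."""
    win = len(trace_dbm) // n_steps
    out = []
    for i in range(n_steps):
        window = trace_dbm[i * win:(i + 1) * win]
        mw = sum(10 ** (p / 10) for p in window) / len(window)  # dBm -> mW
        out.append(10 * math.log10(mw))                          # mW -> dBm
    return out

# Synthetic trace: three power steps at -10, 0 and +10 dBm, 4 samples each
trace = [-10.0] * 4 + [0.0] * 4 + [10.0] * 4
print(average_step_powers(trace, 3))  # ≈ [-10.0, 0.0, 10.0]
```

The speed gain comes from replacing one configure-and-measure cycle per parameter value with a single sweep; the accuracy question is whether each window is long enough to average out measurement noise.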
345
Calibration of parameters for the Heston model in the high volatility period of market
Maslova, Maria, January 2008
The main idea of this work is the calibration of parameters for the Heston stochastic volatility model. We carry out this procedure using the OMXS30 index from the NASDAQ OMX Nordic Exchange Market. We separate our data into a stable period and a high-volatility period on this Nordic market. The deviation detection problem is solved using Bayesian change-point analysis. We estimate the parameters of the Heston model for each of the periods and draw conclusions.
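For context, a hedged sketch of the variance dynamics dv = κ(θ − v)dt + ξ√v dW that the calibrated Heston parameters govern, simulated with a full-truncation Euler scheme; the parameter values are illustrative, not the thesis estimates:

```python
# Hedged sketch (not the thesis code): full-truncation Euler simulation of
# the Heston variance process. kappa = mean-reversion speed, theta =
# long-run variance, xi = vol-of-vol, v0 = initial variance. Values are
# illustrative only.
import math
import random

def simulate_variance(kappa, theta, xi, v0, T=1.0, n=1000, seed=42):
    """Simulate one path of the Heston variance on [0, T] with n Euler steps."""
    random.seed(seed)
    dt = T / n
    v = v0
    path = [v]
    for _ in range(n):
        v_pos = max(v, 0.0)  # full truncation keeps sqrt well-defined
        dW = random.gauss(0.0, 1.0) * math.sqrt(dt)
        v += kappa * (theta - v_pos) * dt + xi * math.sqrt(v_pos) * dW
        path.append(v)
    return path

path = simulate_variance(kappa=2.0, theta=0.04, xi=0.3, v0=0.09)
print(len(path))  # 1001 points: initial value plus 1000 steps
```

Calibration then amounts to choosing (κ, θ, ξ, ρ, v0) so that model prices or returns generated by such dynamics best match the market data in each period.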
346
Lens Distortion Calibration Using Point Correspondences
Stein, Gideon P., 01 December 1996
This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D locations of the points or the camera locations. The standard lens distortion model is a model of the deviations of a real camera from the ideal pinhole or projective camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras, there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in the feature detection and due to lens distortion, these constraints do not hold exactly and we get some error. The calibration is a search for the lens distortion parameters that minimize this error. Using simulation and experimental results with real images we explore the properties of this method. We describe the use of this method with the standard lens distortion model, radial and decentering, but it could also be used with any other parametric distortion model. Finally, we demonstrate that lens distortion calibration improves the accuracy of 3D reconstruction.
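A hedged sketch of the standard radial-plus-decentering model the paper refers to (this is the textbook form with illustrative coefficients, not the paper's code): the search over lens distortion parameters varies k1, k2, p1, p2 so that undistorted correspondences best satisfy the epipolar/trilinear constraints.

```python
# Hedged sketch (textbook model, not the paper's code): apply radial
# (k1, k2) and decentering/tangential (p1, p2) distortion to an ideal
# normalized image point. Coefficient values below are illustrative.

def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map ideal normalized coordinates (x, y) to distorted coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# Barrel distortion (k1 < 0) pulls points toward the image center:
print(distort(0.5, 0.0, k1=-0.2))  # x shrinks from 0.5 toward the center
```

Calibration inverts this map: it finds the coefficients whose inverse, applied to detected features, minimizes the multi-view constraint error.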
347
Vision based navigation system for autonomous proximity operations: an experimental and analytical study
Du, Ju-Young, 17 February 2005
This dissertation presents an experimental and analytical study of the Vision Based Navigation system (VisNav), a novel intelligent optical sensor system recently invented at Texas A&M University for autonomous proximity operations. The dissertation focuses on system calibration techniques and navigation algorithms and is composed of four parts. First, the fundamental hardware and software design of the VisNav system is introduced. Second, system calibration techniques that enable accurate VisNav applications are discussed, along with a characterization of errors. Third, a new six degree-of-freedom navigation algorithm based on the Gaussian Least Squares Differential Correction is presented that provides geometrically optimal position and attitude estimates through batch iterations. Finally, a dynamic state estimation algorithm utilizing the Extended Kalman Filter (EKF) is developed that recursively estimates position, attitude, linear velocities, and angular rates. Moreover, an approach for integrating VisNav measurements with those made by an Inertial Measurement Unit (IMU) is derived. This novel VisNav/IMU integration technique is shown to significantly improve the navigation accuracy and to guarantee the robustness of the navigation system in the event of occasional dropouts of VisNav data.
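A hedged scalar sketch of the predict/update cycle underlying any Kalman-type filter (this is the generic recursion for a constant scalar state, not the dissertation's full position/attitude EKF; noise values are illustrative):

```python
# Hedged sketch (generic recursion, not the VisNav filter): a scalar Kalman
# filter estimating a constant state from noisy measurements. q = process
# noise variance, r = measurement noise variance. Values are illustrative.

def kalman_scalar(z_list, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Run predict/update over measurements z_list; return final estimate."""
    x, p = x0, p0
    for z in z_list:
        p += q                  # predict: state modeled constant, add noise
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the measurement residual
        p *= (1.0 - k)          # shrink covariance after the update
    return x

est = kalman_scalar([1.2, 0.9, 1.1, 1.0, 0.95])
print(est)  # settles near the true value, ~1.0
```

The EKF in the dissertation applies this same cycle to the full state vector, linearizing the nonlinear camera measurement model at each step; the VisNav/IMU integration feeds IMU outputs into the prediction stage between camera updates.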
348
Design of frequency synthesizers for short range wireless transceivers
Valero Lopez, Ari Yakov, 30 September 2004
The rapid growth of the market for short-range wireless devices, with standards such as Bluetooth and Wireless LAN (IEEE 802.11) being the most important, has created a need for transceivers that combine a high level of integration with drastic power and area reduction. The radio section of the devices designed for these standards is the limiting factor in the power reduction effort. A key building block in a transceiver is the frequency synthesizer, since it operates at the highest frequency of the system and consumes a very large portion of the total power in the radio. This dissertation presents the basic theory and a design methodology for frequency synthesizers targeted at short-range wireless applications. Three examples of synthesizers are presented. The first is a frequency synthesizer integrated in a Bluetooth receiver fabricated in 0.35μm CMOS technology; the receiver uses a low-IF architecture to downconvert the incoming Bluetooth signal to 2MHz. The second synthesizer is integrated within a dual-mode receiver capable of processing signals of the Bluetooth and Wireless LAN (IEEE 802.11b) standards. It is implemented in BiCMOS technology and operates the voltage controlled oscillator at twice the required frequency to generate quadrature signals through a divide-by-two circuit. A phase-switching prescaler is featured in the synthesizer, and a large capacitance is integrated on-chip using a capacitance multiplier circuit that provides a drastic area reduction while adding a negligible phase noise contribution. The third synthesizer is an extension of the second: the operation range of the VCO is extended to cover a frequency band from 4.8GHz to 5.85GHz, so that the synthesizer can generate LO signals for the Bluetooth and IEEE 802.11a, b and g standards.
The quadrature output of the 5-6 GHz signal is generated through a first-order RC-CR network with an automatic calibration loop. The loop uses a high-frequency phase detector to measure the deviation from the 90° separation between the I and Q branches and implements an algorithm to minimize the phase errors between the I and Q branches and their differential counterparts.
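A hedged sketch of one reason such a calibration loop is needed over a wide band (illustrative component values, not the chip's design): the low-pass and high-pass outputs of an ideal first-order RC-CR network are 90° apart at every frequency, but their amplitudes match only at ω = 1/(RC), so I/Q imbalance grows toward the band edges once component mismatch and parasitics are added.

```python
# Hedged sketch (ideal first-order RC-CR network, illustrative values):
# gain of the low-pass branch |1/(1+jwRC)| and high-pass branch
# |jwRC/(1+jwRC)|. Their ratio equals wRC, which is 1 only at w = 1/(RC).
import math

def rc_cr_gains(f_hz, r_ohm, c_farad):
    """Return (low-pass gain, high-pass gain) of an RC-CR network at f_hz."""
    wrc = 2.0 * math.pi * f_hz * r_ohm * c_farad
    lp = 1.0 / math.sqrt(1.0 + wrc ** 2)
    hp = wrc / math.sqrt(1.0 + wrc ** 2)
    return lp, hp

# Choose C so the network is centered at 5.3 GHz with R = 100 ohms
r = 100.0
c = 1.0 / (2.0 * math.pi * 5.3e9 * r)
for f in (4.8e9, 5.3e9, 5.85e9):
    lp, hp = rc_cr_gains(f, r, c)
    print(f / 1e9, round(hp / lp, 3))  # I/Q amplitude ratio; 1.0 only at 5.3
```

In the real circuit, the calibration loop corrects phase rather than amplitude errors, but both stem from the same frequency- and mismatch-dependence of the network.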
349
Optical Flow Based Structure from Motion
Zucchelli, Marco, January 2002
No description available.