91 |
Calibration of High Dimensional Compressive Sensing Systems: A Case Study in Compressive Hyperspectral Imaging / Poon, Phillip; Dunlop, Matthew / 10 1900
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Compressive Sensing (CS) is a set of techniques that can faithfully acquire a signal from sub-Nyquist measurements, provided the class of signals has certain broadly applicable properties. Reconstruction (or exploitation) of the signal from these sub-Nyquist measurements requires a forward model: knowledge of how the system maps signals to measurements. In high-dimensional CS systems, determining this forward model via direct measurement of the system response to the complete set of impulse functions is impractical. In this paper, we discuss the development of a parameterized forward model for the Adaptive, Feature-Specific Spectral Imaging Classifier (AFSSI-C), an experimental compressive spectral image classifier. This parameterized forward model drastically reduces the number of calibration measurements.
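The parameterization idea can be sketched in a few lines: instead of probing the system with all N impulse inputs, a handful of known test signals suffice to fit the small number of parameters that generate the forward matrix. The matrix family and parameter values below are invented for illustration and are not the actual AFSSI-C model.

```python
import numpy as np

# Hypothetical sketch: a compressive system y = A(theta) @ x whose M x N
# forward matrix is generated by a few parameters (here a gain and a
# frequency), rather than measured column-by-column with N impulse inputs.
def forward_matrix(theta, M=8, N=64):
    gain, freq = theta
    rows = np.arange(M)[:, None] * np.arange(N)[None, :]
    return gain * np.cos(freq * rows / N)

rng = np.random.default_rng(0)
true_theta = (1.5, 2.0)
A_true = forward_matrix(true_theta)

# Calibrate: probe with a handful of known test signals and fit theta by a
# brute-force grid search (a stand-in for nonlinear least squares).
X = rng.standard_normal((64, 5))          # 5 calibration signals, not 64 impulses
Y = A_true @ X
best = min(((g, f) for g in np.linspace(1, 2, 21) for f in np.linspace(1, 3, 21)),
           key=lambda th: np.linalg.norm(forward_matrix(th) @ X - Y))
print(best)  # recovers parameters close to (1.5, 2.0) from 5 measurements
```

Fitting two parameters from five probe signals, rather than measuring 64 impulse responses, is the calibration saving the abstract describes, scaled down to toy size.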
|
92 |
Radiometric calibration of high resolution UAVSAR data over hilly, forested terrain / Riel, Bryan Valmote / 10 February 2011
SAR backscatter data contain both geometric and radiometric distortions due to underlying topography and the radar viewing geometry. Thus, applications using SAR backscatter data to derive various scientific products (e.g., above-ground biomass) require accurate absolute radiometric calibration. The calibration process involves estimation of the local radar scattering area through knowledge of the imaged terrain, which is often obtained from DEMs. High-resolution UAVSAR data over a New Hampshire boreal forest test site were radiometrically calibrated using a low-resolution SRTM DEM, and different calibration methods were tested and compared. Heteromorphic methods utilizing DEM integration model scattering area better than homomorphic methods based on the local incidence or projection angle, with a resultant backscatter calibration difference of less than 0.5 dB. Additionally, the impact of low DEM resolution on the calibration was investigated through a Fourier analysis of different topographic classes. Power spectra of high-resolution airborne lidar DEMs were used to characterize the topography of steep, moderate, and flat terrain. Thus, the errors for a given low-resolution DEM associated with a particular topographic class could be quantified through a comparison of its power spectrum with that from the lidar. These errors were validated by comparing DEM slopes derived from SRTM and lidar DEMs.
The impact of radiometric calibration on the biomass retrieval capabilities of UAVSAR data was investigated by fitting second-order polynomials to backscatter vs. biomass plots for the HH, HV, and VV polarizations. LVIS RH50 values were used to calculate biomass, and the process was repeated for both uncalibrated and area-calibrated UAVSAR images. The calibration improved the $R^2$ values for the polynomial fits by 0.7-0.8 for all three polarizations but had little effect on the polynomial coefficients. The Fourier method for predicting DEM errors was used to predict biomass errors due to the calibration. The greatest errors occurred in the near range of the SAR image and on slopes facing towards the radar.
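The core of area-based radiometric calibration can be sketched as follows: divide the measured backscatter by the local scattering area, normalized to a flat-terrain reference, so that slopes facing the radar are no longer artificially bright. The area values below are illustrative stand-ins, not DEM-derived UAVSAR quantities.

```python
import numpy as np

# Terrain-flattening sketch (hypothetical numbers): subtracting the area
# correction in dB is equivalent to dividing by the area ratio in linear units.
def area_calibrate(sigma0_db, local_area, flat_area):
    """Return area-calibrated backscatter in dB."""
    correction_db = 10.0 * np.log10(local_area / flat_area)
    return sigma0_db - correction_db

sigma0 = np.array([-8.0, -10.0, -12.0])   # measured backscatter, dB
local = np.array([1.6, 1.0, 0.7])         # local scattering area (relative)
flat = 1.0                                # flat-terrain reference area
print(area_calibrate(sigma0, local, flat))
```

The first pixel (a radar-facing slope with 1.6x the flat-terrain area) is darkened by about 2 dB, while the flat pixel is unchanged, which is the qualitative behavior the calibration is meant to produce.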
|
93 |
On-orbit Characterization of Hyperspectral Imagers / McCorkel, Joel / January 2009
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne- and satellite-based sensors. Ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied to the characterization of Hyperion. Spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and spectral library data for the Libyan test site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by the RSG that serves as a baseline for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of reflectance-based results in most spectral regions.
Larger disagreements exist for shorter wavelengths studied in this work as well as in spectral areas that experience absorption by the atmosphere.
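A minimal sketch of the reflectance-prediction step: scale a known spectral shape so that, averaged over a reference band, it matches the band reflectance retrieved from the multispectral sensor. The band limits and the shape below are invented for illustration, not RSG data.

```python
import numpy as np

# Hypothetical sketch: combine a library spectral shape with one
# multispectral band retrieval to predict hyperspectral reflectance.
def predict_hyperspectral(shape, band_mask, band_reflectance):
    """Scale a spectral shape to match a multispectral band retrieval."""
    scale = band_reflectance / shape[band_mask].mean()
    return scale * shape

wl = np.linspace(400, 2500, 211)          # wavelength grid, nm
shape = 0.2 + 0.1 * (wl - 400) / 2100     # smooth desert-like spectral shape
band = (wl >= 620) & (wl <= 670)          # a red reference band
predicted = predict_hyperspectral(shape, band, 0.35)
print(predicted[band].mean())             # matches the band retrieval, 0.35
```

In the real method, several bands constrain the scaling and historical atmospheric parameters transfer the surface reflectance to top-of-atmosphere radiance; this sketch shows only the shape-anchoring idea.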
|
94 |
Impact of Data Collection and Calibration of Water Distribution Models on Model-Based Decisions / Sumer, Derya / January 2007
Mathematical models of water distribution systems (WDS) serve as tools to represent real systems for many different purposes. Calibration is the process of fine-tuning the model parameters so that the real system is well represented. In practice, calibration is performed assuming all information is deterministic. Recent research has incorporated uncertainties caused by field measurements into the calibration process. Parameter (D-optimality) and predictive (I-optimality) uncertainties have been used as indicators of how well a system is calibrated. This study focuses on a methodology that extends previous work by considering the impact of uncertainty on decisions that are made using the model. A new sampling strategy is proposed that takes into account the accuracy needed for different model objectives. The methodology uses an optimization routine that minimizes the squared differences between observed and model-calculated head values by adjusting the model parameters. Given uncertainty in measurements, the parameters from this nonlinear regression are imprecise, and the model parameter uncertainties are computed using a first-order second-moment (FOSM) analysis. Parameter uncertainties are then propagated to model prediction uncertainties through a second FOSM analysis. Finally, the prediction uncertainty relationships are embedded in optimization problems to assess the effect of the uncertainties on model-based decisions. Additional data is collected provided that the monetary benefits of reducing uncertainties can be justified. The proposed procedure is first applied to a small hypothetical network for a system expansion design problem using a steady-state model. It is hypothesized that calibrating WDS models for different objectives requires different amounts of data and different levels of model accuracy. A real-scale network for design and operation problems is studied using the same methodology for comparison.
The effect of a common practice, grouping pipes in the system, is also examined in both studies. Results suggest that the cost reductions are related to the convergence of the mean parameter estimates and the reduction of parameter variances. The impact of each factor changes during the calibration process as the parameters become more precise and the design is modified. Identification of the cause of cost changes, however, is not always obvious.
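The two-stage FOSM propagation can be sketched as follows, with a hypothetical three-measurement, two-parameter Jacobian standing in for the head-vs-roughness sensitivities of a real network.

```python
import numpy as np

# FOSM sketch (hypothetical numbers): measurement variance -> parameter
# covariance via the regression Jacobian, then parameter covariance ->
# prediction variance via a second first-order step.
J = np.array([[1.0, 0.5],
              [0.8, 1.2],
              [0.3, 0.9]])              # d(head_i)/d(roughness_j) at the solution
sigma_meas = 0.05                       # head measurement std dev (m)

cov_params = sigma_meas**2 * np.linalg.inv(J.T @ J)   # parameter covariance
g = np.array([0.6, 1.1])                # d(predicted head)/d(roughness_j)
var_pred = g @ cov_params @ g           # first-order prediction variance
print(np.sqrt(var_pred))                # predicted-head std dev (m)
```

D-optimality and I-optimality criteria mentioned in the abstract summarize `cov_params` and `var_pred`, respectively, so this is the quantity a new sampling location would be chosen to shrink.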
|
95 |
A Study of Predicted Energy Savings and Sensitivity Analysis / Yang, Ying / 16 December 2013
The sensitivity of the important inputs and the reliability of the savings prediction function of the WinAM 4.3 software are studied in this research. WinAM was developed by the Continuous Commissioning (CC) group in the Energy Systems Laboratory at Texas A&M University. For the sensitivity analysis task, fourteen inputs are studied by adjusting one input at a time within ±30% of its baseline. The Single Duct Variable Air Volume (SDVAV) system, with and without the economizer, has been applied to the square zone model. Mean Bias Error (MBE) and Influence Coefficient (IC) have been selected as the statistical methods to analyze the outputs obtained from WinAM 4.3. For the savings prediction reliability analysis task, eleven Continuous Commissioning projects have been selected. After reviewing each project, seven of the eleven have been chosen. The measured energy consumption data for the seven projects is compared with the simulated energy consumption data obtained from WinAM 4.3. Normalized Mean Bias Error (NMBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) statistical methods have been used to analyze the results from real measured data and simulated data.
Highly sensitive parameters for each energy resource of the system with the economizer and the system without the economizer have been generated in the sensitivity analysis task. The main result of the savings prediction reliability analysis is that calibration improves the model’s quality. It also improves the predicted energy savings results compared with the results generated from the uncalibrated model.
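The Influence Coefficient used in the one-at-a-time analysis can be sketched as below: the relative change in output divided by the relative change in input. The toy energy model is a stand-in for a WinAM simulation run, not the real software.

```python
# One-at-a-time sensitivity sketch with the Influence Coefficient,
# IC = (change in output / base output) / (change in input / base input).
def influence_coefficient(model, base_input, perturbation=0.30):
    base_out = model(base_input)
    pert_out = model(base_input * (1 + perturbation))
    return ((pert_out - base_out) / base_out) / perturbation

# Toy model: annual energy use roughly linear in one input (illustrative).
model = lambda airflow: 50.0 + 2.0 * airflow
print(influence_coefficient(model, 100.0))   # 0.8: output is sub-proportional
```

An IC near 1 means the output tracks the input proportionally; values near 0 flag inputs whose ±30% perturbation barely matters, which is how the "highly sensitive parameters" above are separated from the rest.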
|
96 |
Applying Calibration to Improve Uncertainty Assessment / Fondren, Mark E / 16 December 2013
Uncertainty has a large effect on projects in the oil and gas industry, because most aspects of project evaluation rely on estimates. Industry routinely underestimates uncertainty, often significantly. The tendency to underestimate uncertainty is nearly universal. The cost associated with underestimating uncertainty, or overconfidence, can be substantial. Studies have shown that moderate overconfidence and optimism can result in expected portfolio disappointment of more than 30%. It has been shown that uncertainty can be assessed more reliably through look-backs and calibration, i.e., comparing actual results to probabilistic predictions over time. While many recognize the importance of look-backs, calibration is seldom practiced in industry. I believe a primary reason for this is lack of systematic processes and software for calibration.
The primary development of my research is a database application that provides a way to track probabilistic estimates and their reliability over time. The Brier score and its components, mainly calibration, are used for evaluating reliability. The system is general in the types of estimates and forecasts that it can monitor, including production, reserves, time, costs, and even quarterly earnings. Forecasts may be assessed visually, using calibration charts, and quantitatively, using the Brier score. The calibration information can be used to modify probabilistic estimation and forecasting processes as needed to be more reliable. Historical data may be used to externally adjust future forecasts so they are better calibrated. Three experiments with historical data sets of predicted vs. actual quantities, e.g., drilling costs and reserves, are presented and demonstrate that external adjustment of probabilistic forecasts improve future estimates. Consistent application of this approach and database application over time should improve probabilistic forecasts, resulting in improved company and industry performance.
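The Brier score used for evaluating forecast reliability has a compact definition for binary outcomes: the mean squared difference between the stated probability and what actually happened. The well-cost probabilities below are hypothetical.

```python
# Brier score sketch: lower is better; a perfectly calibrated forecaster's
# stated probabilities match observed frequencies over many forecasts.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. "80% chance the well comes in under budget", stated for four wells:
probs = [0.8, 0.8, 0.8, 0.8]
hits = [1, 1, 1, 0]               # under budget 3 times out of 4
print(brier_score(probs, hits))   # 0.19
```

Here the 80% forecasts verified 75% of the time, so the forecaster is slightly overconfident; tracked over many estimates, that gap is the calibration signal the database application is built to surface.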
|
97 |
Digital camera calibration for mining applications / Jiang, Lingen / Unknown Date
No description available.
|
98 |
Fast waveform metrology : generation, measurement and application of sub-picosecond electrical pulses / Smith, Andrew James Alan / January 1996
This thesis describes work performed at the National Physical Laboratory to improve the electrical risetime calibration of instruments such as fast sampling oscilloscopes. The majority of the work can be divided into four sections: development of an ultrafast optoelectronic pulse generator; measurement of fast electrical pulses with an electro-optic sampling system; de-embedding of transmission line and transition effects as measured at different calibration reference planes; and calibration of an oscilloscope. The pulse generator is a photoconductive switch based on low-temperature Gallium Arsenide, which has a very fast carrier recombination time. Sub-picosecond electrical pulses are produced by illuminating a planar switch with 200 fs optical pulses from a Ti:sapphire laser system. The pulses are measured using a sampling system with an external electro-optic probe in close proximity to the switch. The electro-optic sampling system, with a temporal resolution better than 500 fs, is used to measure the electrical pulse shape at various positions along the planar transmission line. The results are compared to a pulse propagation model for the line. The effects of different switch geometries are examined. Although the pulse generator produces sub-picosecond pulses near the point of generation, the pulse is shown to broaden to 7 ps after passing along a length of transmission line and a coplanar-coaxial transition. For a sampling oscilloscope with a coaxial input connector, this effect is significant. Frequency-domain measurements with a network analyser, further electro-optic sampling measurements, and the transmission line model are combined to find the network transfer function of the transition. Using the pulse generator, the electro-optic sampling system and the knowledge of the transition, a 50 GHz sampling oscilloscope is calibrated.
The determination of the instrument step response (nominal risetime 7 ps) is improved from an earlier value of 8.5 (-3.5/+2.9) ps to a new value of 7.4 (-2.1/+1.7) ps with the calibration techniques described.
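The de-embedding idea, dividing out the transition's network transfer function in the frequency domain, can be sketched as follows. The pulse and the 50 GHz low-pass filter standing in for the coplanar-coaxial transition are synthetic, not measured NPL data.

```python
import numpy as np

# De-embedding sketch: if the transition's transfer function H(f) is known,
# the pulse at the generator reference plane can be recovered from the
# broadened pulse at the scope plane by spectral division.
t = np.arange(256) * 0.1                   # time axis, ps
pulse = np.exp(-((t - 5.0) ** 2) / 0.5)    # sub-ps pulse at the switch
f = np.fft.fftfreq(256, d=0.1)             # frequency axis, THz
H = 1.0 / (1.0 + 1j * f / 0.05)            # first-order low-pass, ~50 GHz cutoff
broadened = np.fft.ifft(np.fft.fft(pulse) * H).real   # what the scope sees

recovered = np.fft.ifft(np.fft.fft(broadened) / H).real   # de-embed
print(np.allclose(recovered, pulse))       # True in this noiseless sketch
```

In practice the division must be regularized where |H(f)| is small and the measurement noisy; the noiseless round trip here only shows where the transfer function enters.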
|
99 |
Calibration and 3D Model Generation for a Low-Cost Structured Light Foot Scanner / Viswanathan, NavaneethaKannan / 21 January 2013
The need for custom footwear among consumers is growing every day. Serious research is being undertaken with regard to the fit and comfort of footwear. The integration of scanning systems in the footwear and orthotic industries has played a significant role in generating 3D digital representations of the foot for automated measurements, from which custom footwear or an orthosis is manufactured. The cost of such systems is considerably high for many manufacturers due to their expensive components, complex processing algorithms and difficult calibration techniques.
This thesis presents a fast and robust calibration technique for a low-cost 3D laser scanner. The calibration technique is based on determining the mathematical relationship that relates the image coordinates to the real world coordinates. The relationship is determined by mapping the known real world coordinates of a reference object to its corresponding image coordinates by multivariate polynomial regression. With the developed mathematical relationship, 3D data points can be obtained from the 2D images of any object placed in the scanner.
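The calibration mapping can be sketched with ordinary least squares on a second-order polynomial basis. The reference geometry below is synthetic, and a real scanner would fit all three world coordinates from each camera rather than the single one shown.

```python
import numpy as np

# Sketch: fit a multivariate polynomial mapping image coordinates (u, v)
# to a world coordinate using reference points with known positions.
def design_matrix(u, v):
    # second-order terms: 1, u, v, u*v, u^2, v^2
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

rng = np.random.default_rng(1)
u, v = rng.uniform(0, 640, 50), rng.uniform(0, 480, 50)
x_world = 0.02 * u - 0.01 * v + 1e-5 * u * v + 3.0   # synthetic reference points

coeffs, *_ = np.linalg.lstsq(design_matrix(u, v), x_world, rcond=None)
x_pred = design_matrix(u, v) @ coeffs
print(np.abs(x_pred - x_world).max())    # near zero: references reproduced
```

Once `coeffs` is fixed, any 2D laser-profile point detected in a scan image maps to a world coordinate through the same `design_matrix`, which is the step the next paragraph's reconstruction relies on.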
An image processing script is developed to detect the 2D image points of the laser profile in a series of scan images from 8 cameras. The detected 2D image points are reconstructed into 3D data points based on the mathematical model developed by the calibration process. Following that, the output model is achieved by triangulating the 3D data points as a mesh model with vertices and normals. The data is exported as a computer aided design (CAD) software readable format for viewing and measuring.
This method proves to be less complex, and the scanner was able to generate 3D models with an accuracy of ±0.05 cm. The 3D data points from the output model were compared against a reference model scanned by an industrial-grade scanner to verify and validate the result. The devised methodology for calibrating the 3D laser scanner can be employed to obtain accurate and reliable 3D data of the foot shape, and it has been successfully tested with several participants.
|
100 |
Digital camera calibration for mining applications / Jiang, Lingen / 11 1900
This thesis examines the issues related to calibrating digital cameras and lenses, which is an essential prerequisite for the extraction of precise and reliable 3D metric information from 2D images. The techniques used to calibrate a Canon PowerShot A70 camera with 5.4 mm zoom lens and a professional single lens reflex camera Canon EOS 1Ds Mark II with 35 mm, 85 mm, 135 mm and 200 mm prime lenses are described. The test results have demonstrated that a high correlation exists among some interior and exterior orientation parameters. The correlations are dependent on the parameters being adjusted and the network configuration. Not all of the 11 interior orientation parameters are significant for modelling the camera and lens behaviour. The first two coefficients K1, K2 would be sufficient to describe the radial distortion effect for most digital cameras. Furthermore, the interior orientation parameters of a digital camera and lens from different calibration tests can change. This work has demonstrated that given a functional model that represents physical effects, a reasonably large number of 3D targets that are well distributed in three-dimensional space, and a highly convergent imaging network, all of the usual parameters can be estimated to reasonable values. / Mining Engineering
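The two-coefficient radial distortion model referred to above (K1, K2) can be sketched as follows; the coefficient values are illustrative, not calibration results for the cameras discussed.

```python
# Radial distortion sketch: r_d = r * (1 + K1*r^2 + K2*r^4), with r the
# distance from the principal point in normalized image coordinates.
def apply_radial_distortion(x, y, k1, k2):
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# A point at normalized coordinates (0.3, 0.4) under mild barrel distortion
# (negative K1 pulls points toward the principal point):
xd, yd = apply_radial_distortion(0.3, 0.4, k1=-0.1, k2=0.01)
print(xd, yd)
```

Bundle-adjustment calibration estimates K1 and K2 (with the other interior orientation parameters) by minimizing reprojection error over the 3D target network; the finding above is that these two terms alone capture most of the radial effect for typical digital cameras.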
|