About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

A small ³He cryostat for single crystal neutron diffraction applications with a new thermometer calibration technique

Starr, Earl F. January 1972 (has links)
No description available.
112

Display to Camera Calibration Techniques

Gatt, Philip 01 January 1984 (has links) (PDF)
With today's digitally controlled optical sensing devices, there is a need for a fast and accurate calibration procedure. Typical display devices and optical fiber bundles are plagued with inaccuracies, with many sources of error such as delay, time constants, pixel distortion, pixel bleeding, and noise. The calibration procedure must measure these inaccuracies and compute a set of correction factors, which are then used in real time to alter the command data so that the intended pixels are correctly commanded. This paper discusses a calibration procedure that employs a special matrix inverse algorithm. This algorithm, which is applicable only to sparse symmetric band-diagonal matrices, successfully inverts a 10,000 by 10,000 matrix in less than four seconds on a VAX-11/780. It is estimated that computing the same matrix inverse with conventional Gauss-Jordan techniques would require 4,800 hours. This paper also documents the BlendI routines, which will be used as a calibration procedure for the BlendI system.
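The speedup claimed in this abstract follows directly from exploiting band structure: a symmetric banded system factors and solves in O(n·b²) operations rather than the O(n³) of dense Gauss-Jordan elimination, which is why a 10,000 × 10,000 problem drops from thousands of hours to seconds. The sketch below is not the thesis's BlendI code; it uses SciPy's banded Cholesky solver as a modern stand-in, with an assumed tridiagonal test system and an invented right-hand side.

```python
# Minimal sketch (assumptions noted inline) of a symmetric banded solve.
import numpy as np
from scipy.linalg import solveh_banded

n, b = 10_000, 1            # size and half-bandwidth (tridiagonal test case)

# Upper banded storage: row 0 holds the superdiagonal, row 1 the main diagonal.
ab = np.zeros((b + 1, n))
ab[0, 1:] = -1.0            # superdiagonal; symmetry supplies the subdiagonal
ab[1, :] = 4.0              # main diagonal (diagonally dominant, hence SPD)

rhs = np.ones(n)            # stand-in for the measured pixel-error vector
x = solveh_banded(ab, rhs)  # correction factors via banded Cholesky, O(n*b^2)
print(x[:3])
```

Note that for the correction-factor problem the abstract describes, one would solve for the correction vector directly rather than form an explicit inverse, since the factorization can be reused across right-hand sides.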
113

Measuring the ⁷Be Neutrino Flux From the Sun: Calibration of the Borexino Solar Neutrino Detector

Hardy, Steven 06 May 2010 (has links)
The Borexino solar neutrino detector is a real-time liquid scintillator detector designed to measure sub-MeV neutrinos. With its unprecedented level of radio-purity, Borexino is poised to provide the most precise measurements to date of solar neutrino and geo-antineutrino fluxes. However, in order to reduce the systematic errors to sub-5% levels, the detector must be carefully calibrated to understand, among other things, the position and energy reconstructions. To that end, the Virginia Tech component of the Borexino collaboration has constructed a system for deploying and locating calibration sources within the detector. The system was used in four separate calibration campaigns and deployed numerous sources in almost 300 locations throughout the detector. The data from the calibrations have already resulted in the reduction of several sources of systematic error by a factor of two or more. With the results from the calibration, the Borexino detector has entered a new era of low-energy, high-precision neutrino detection. This work was supported by NSF Grant 0802114 / Ph. D.
114

Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty

Whiting, Nolan Wagner 19 July 2019 (has links)
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed against the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Generally, the MAVM performed best in cases with sparse data and/or large extrapolations, while Bayesian calibration outperformed the others where an extensive amount of experimental data covers the application domain. / Master of Science / Uncertainty often exists when conducting physical experiments, whether due to input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, and it can make validating a model and comparing its results with those of an experiment difficult. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be aleatory (randomness that can be described by a probability distribution) or epistemic (lack of knowledge, with inputs known only to lie within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes.
To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed against the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each approach was assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainties in the simulation outside the region in which experimental observations were made and model form uncertainties could be observed.
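Two of the four approaches above hinge on the area validation metric, so a compact illustration may help. In its simplest form (a single deterministic model output compared against experimental samples), the AVM is the area between the empirical CDF of the experiments and the step CDF at the simulation value, which reduces to the mean absolute deviation of the samples from that value. This is a hedged sketch of the standard AVM definition, not code from the thesis, and the sample values are invented.

```python
# Sketch of the area validation metric for a deterministic model output.
import numpy as np

def area_validation_metric(y_exp, y_sim):
    """Area between the empirical CDF of y_exp and a step CDF at y_sim.

    For a single deterministic simulation value this area reduces to the
    mean absolute deviation of the experimental samples from y_sim.
    """
    y = np.asarray(y_exp, dtype=float)
    return float(np.mean(np.abs(y - y_sim)))

# Invented synthetic "experimental" lift coefficients vs. one model value.
print(area_validation_metric([0.92, 0.95, 0.97, 1.01, 1.03], 1.00))  # 0.04
```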
115

Analysis of Freeway Weaving Areas Using Corridor Simulator and Highway Capacity Manual

Ramachandran, Suresh 11 December 1997 (has links)
Weaving is defined as the crossing of two or more traffic streams traveling in the same direction along a significant length of highway without the aid of traffic control devices. The traditional method used for the design and operational analysis of a highway is the Highway Capacity Manual (HCM). The traditional weaving methods in the HCM use road geometry and traffic volume as inputs and provide an estimate of speed as an output. CORSIM is a new computer simulation model developed by the Federal Highway Administration (FHWA) for simulating traffic behavior on integrated urban transportation networks of freeways and surface streets. The intent of this research is to identify the differences in results between the new CORSIM simulation and the traditional HCM approach when modeling weaving sections on a freeway, and to make recommendations. The research also compares the modeling strategies and provides an analysis of the output. / Master of Science
116

On Vergence Calibration of a Stereo Camera System

Jansson, Sebastian January 2012 (has links)
Modern cars can be bought with camera systems that watch the road ahead. They can be used for many purposes; one use is to alert the driver when other cars are in the path of collision. If the warning system is to be reliable, the input data must be correct. One input can be the depth image from a stereo camera system, and one reason for the depth image to be wrong is that the vergence angle between the cameras is erroneously calibrated. Even if the calibration is accurate from production, there is a risk that the vergence changes due to temperature variations when the car is started. This thesis proposes a solution for short-time live calibration of a stereo camera system in which the speedometer data available on the CAN bus are used as a reference. The motion of the car is estimated using visual odometry, which is affected by any errors in the calibration. The vergence angle is then altered virtually until the estimated speed equals the reference speed. The method is analyzed for noise and tested on real data. It is shown that detection of calibration errors down to 0.01 degrees is possible under certain circumstances using the proposed method.
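The calibration loop this abstract describes lends itself to a short sketch: treat the vergence correction as a scalar parameter, virtually shift the disparities by f·Δθ (small-angle approximation, with f in pixels), re-estimate the speed, and root-find for the correction that reproduces the CAN-bus speed. Everything below is an assumption-laden toy, with `run_visual_odometry` a hypothetical stand-in for a real visual odometry pipeline and all numeric values invented.

```python
# Toy sketch of virtual vergence calibration against a speed reference.
import math
from scipy.optimize import brentq

F_PX = 800.0     # assumed focal length in pixels
V_REF = 22.2     # reference speed from the CAN bus [m/s]

def run_visual_odometry(dtheta):
    """Hypothetical stand-in: VO speed estimate after shifting all
    disparities by F_PX * dtheta (small-angle vergence correction)."""
    true_error = 0.0005          # simulated miscalibration, for the demo only
    d_nominal = 40.0             # a representative disparity [px]
    d_corrected = d_nominal + F_PX * (dtheta - true_error)
    # Depth, and hence estimated translation/speed, scales as 1/disparity.
    return V_REF * d_nominal / d_corrected

# Find the vergence correction that makes the VO speed match the CAN bus.
dtheta = brentq(lambda t: run_visual_odometry(t) - V_REF, -0.01, 0.01)
print(f"estimated vergence correction: {math.degrees(dtheta):.4f} deg")
```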
117

Příprava kalibračních měrek pro metodu zkoušení vířivými proudy / Preparation of calibration gauges for the eddy current testing method

Machovič, Daniel January 2020 (has links)
The aim of this diploma thesis is to clarify the topic of calibration samples used in eddy current testing of equipment in nuclear energy. The theoretical part is focused on the eddy current testing method, which belongs to the field of non-destructive testing. This part describes the principle of the method, its scope and limitations, and the classification of the sensors used in this test method. The work also briefly describes the physical principle of the laser, its types, and its operating modes. The practical part is focused on the production of calibration samples by laser. A further aim of the work is the comparison of data obtained from eddy current measurements on laser-made samples and on calibration samples used in practice.
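One quantitative limitation the theoretical part would cover is the depth to which eddy currents penetrate the test piece; the standard depth of penetration is δ = 1/√(π f μ σ). The short example below applies this textbook formula, with illustrative material values for austenitic stainless steel (not figures from the thesis itself).

```python
# Standard depth of penetration of eddy currents, delta = 1/sqrt(pi*f*mu*sigma).
import math

f = 100e3              # test frequency [Hz]
sigma = 1.4e6          # assumed conductivity of austenitic steel [S/m]
mu = 4e-7 * math.pi    # permeability (relative mu_r ~ 1 for austenitic steel)

delta = 1.0 / math.sqrt(math.pi * f * mu * sigma)
print(f"standard depth of penetration: {delta * 1e3:.2f} mm")  # ~1.3 mm
```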
118

Management kalibrací měřidel / Calibration management of gauges

Šmétka, Miroslav January 2010 (has links)
This thesis develops procedures for the management of a calibration laboratory to demonstrate the agreement of calibration results for specified types of gauges: gauge blocks (Johansson gauges), a vernier height gauge, a deformation pressure gauge, and a torque spanner. It further designs documentation in accordance with metrological confirmation, chiefly calibration procedures for these gauges together with procedures for determining measurement uncertainty and interpreting results. The completed calibration procedures are attached to this thesis, in the graphic style used at the experimental technical institute of the ground forces.
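The uncertainty-determination procedures mentioned above typically follow the GUM approach: independent standard uncertainty contributions are combined in quadrature and then multiplied by a coverage factor. The sketch below, with an invented budget for a gauge-block calibration, illustrates the arithmetic; it is not taken from the thesis.

```python
# GUM-style combination of independent standard uncertainties.
import math

# Invented uncertainty budget for a gauge-block calibration [micrometres]:
contributions = {
    "reference standard (certificate)": 0.020,
    "comparator repeatability":         0.012,
    "thermal expansion difference":     0.008,
}

u_c = math.sqrt(sum(u ** 2 for u in contributions.values()))  # in quadrature
U = 2.0 * u_c                                                 # coverage k = 2 (~95 %)

print(f"combined standard uncertainty u_c = {u_c:.3f} um")
print(f"expanded uncertainty U (k = 2)    = {U:.3f} um")
```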
119

Two Axis Fixture Calibration Utilizing Industrial Robot Artifact Object Touch Sensing

Benton, Thomas Henry 22 June 2020 (has links)
No description available.
120

Design techniques for first pass silicon in SOC radio transceivers

Wilson, James Edward 26 June 2007 (has links)
No description available.
