151

Hydrological Modeling of the Upper South Saskatchewan River Basin: Multi-basin Calibration and Gauge De-clustering Analysis

Dunning, Cameron. January 2009.
This thesis presents a method for calibrating regional-scale hydrologic models, using the upper South Saskatchewan River watershed as a case study. Regional-scale hydrologic models can be very difficult to calibrate because of the spatial diversity of their land types. To deal with this diversity, both a manual calibration method and a multi-basin automated calibration method were applied to a WATFLOOD hydrologic model of the watershed. Manual calibration was used to determine the effect of each model parameter on modeling results, and a set of parameters that heavily influenced those results was selected. Each influential parameter was also assigned an initial value and a parameter range to be used during automated calibration. This manual calibration approach proved very effective for improving modeling results over the entire watershed. Automated calibration was performed using a weighted multi-basin objective function based on the average streamflow from six sub-basins: starting from the initial parameter set and ranges found during manual calibration, the Dynamically Dimensioned Search (DDS) optimization algorithm was applied to calibrate the model automatically. Sub-basin results not involved in the objective function were reserved for validation. Automated calibration was deemed successful in providing watershed-wide modeling improvements. The calibrated model was then used as a basis for determining the effect of rain gauge density on model outputs at both the local (sub-basin) and global (watershed) scales. Four de-clustered precipitation data sets were used as model input, and automated calibration was performed with the multi-basin objective function. More accurate results were obtained from models with higher rain gauge density; adding a rain gauge did not necessarily improve modeled results over the entire watershed, but it typically improved predictions in the sub-basin in which the gauge was located.
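
The DDS search named above is simple enough to sketch. Below is a minimal Python sketch of the Dynamically Dimensioned Search of Tolson and Shoemaker (2007) wrapped around a hypothetical weighted multi-basin objective; run_watflood, obs, and the weights w are placeholders for the real model interface, not the thesis code.

    import numpy as np

    def dds(objective, x0, lo, hi, n_iter=1000, r=0.2, seed=0):
        # Dynamically Dimensioned Search (Tolson & Shoemaker, 2007):
        # a greedy single-solution search that perturbs fewer and fewer
        # parameters as the iteration budget is spent, shifting from a
        # global search toward a local one.
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        x_best = np.array(x0, dtype=float)
        f_best = objective(x_best)
        for i in range(1, n_iter + 1):
            # Probability of perturbing each parameter decays with i.
            p = 1.0 - np.log(i) / np.log(n_iter)
            mask = rng.random(x_best.size) < p
            if not mask.any():
                mask[rng.integers(x_best.size)] = True  # perturb at least one
            x_new = x_best.copy()
            step = r * (hi - lo) * rng.standard_normal(x_best.size)
            x_new[mask] += step[mask]
            # Reflect candidates that leave the parameter ranges.
            x_new = np.where(x_new < lo, 2 * lo - x_new, x_new)
            x_new = np.where(x_new > hi, 2 * hi - x_new, x_new)
            x_new = np.clip(x_new, lo, hi)
            f_new = objective(x_new)
            if f_new < f_best:  # greedy acceptance
                x_best, f_best = x_new, f_new
        return x_best, f_best

    def multi_basin_objective(params):
        # Hypothetical wrapper: run_watflood(), obs, and the weights w
        # stand in for the real model run and the six gauged sub-basins.
        sim = run_watflood(params)  # dict: sub-basin id -> simulated flow
        return sum(w[b] * np.mean((sim[b] - obs[b]) ** 2) for b in sim)

    # Stand-in check that the search runs: minimize a quadratic in [0, 1]^3.
    x, fval = dds(lambda v: np.sum((v - 0.25) ** 2),
                  x0=np.full(3, 0.5), lo=np.zeros(3), hi=np.ones(3))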
152

Calibration and Analysis of the MESH Hydrological Model applied to Cold Regions

MacLean, Angela. 30 September 2009.
Concerns regarding climate change have brought about an increased interest in cold-region hydrology, leading to the formation of the IP3 research network. This work is part of the IP3 Network, whose overall goal is to evaluate and demonstrate improved predictions of hydrological and atmospheric fields for cold regions. As such, this thesis involves a series of calibration and validation experiments on the MESH hydrological model (used by IP3 for predictions) with two cold-region case studies. The first case study is the very well instrumented Reynolds Creek Experimental Watershed in Idaho, USA; the second is the Wolf Creek watershed in the Yukon Territory. Because the MESH model is still in the development phase, a thorough analysis of model setup and performance is a critical component of model development, and one intention of this research is to provide feedback for future development of the MESH hydrological model. The Reynolds Creek site was modeled as part of this thesis work and was chosen for its long-term, highly distributed, and detailed data set. The second site, Wolf Creek, was used for a simplified case study. Models of both case study sites were calibrated and validated to carefully evaluate model performance. Reynolds Creek was calibrated as a single-objective problem as well as a multi-objective problem using snow water equivalent data and streamflow data for multiple sites. The hydrological simulations for Wolf Creek were fair; further calibration effort and a more detailed examination of the model setup would likely have produced better results. Calibration and validation of Reynolds Creek produced very good results for streamflow and snow water equivalent at multiple sites throughout the watershed. Calibrating to streamflow generated a very different optimal parameter set than calibrating to snow water equivalent, or to both in a multi-objective framework. A weighted-average multi-objective approach for simultaneously calibrating to snow water equivalent and streamflow can be effective, as it yields a reasonable solution that improves on the single-objective snow water equivalent results without degrading the single-objective streamflow results.
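
As an illustration of the weighted-average multi-objective approach described above, the sketch below scalarizes two goodness-of-fit scores into one objective. The Nash-Sutcliffe metric, the equal weights, and the run_mesh/obs_flow/obs_swe names are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    def nse(sim, obs):
        # Nash-Sutcliffe efficiency: 1.0 is a perfect fit.
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def weighted_objective(params, w_flow=0.5, w_swe=0.5):
        # Scalarized multi-objective score, to be minimized. run_mesh(),
        # obs_flow, and obs_swe are placeholders for the real interfaces;
        # the NSE metric and equal weights are assumptions.
        sim_flow, sim_swe = run_mesh(params)
        return -(w_flow * nse(sim_flow, obs_flow) +
                 w_swe * nse(sim_swe, obs_swe))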
154

Stiffness Calibration of Atomic Force Microscopy Probes under Heavy Fluid Loading

Kennedy, Scott Joseph. January 2010.
This research presents new calibration techniques for the characterization of atomic force microscopy cantilevers. Atomic force microscopy cantilevers are sensors that detect forces on the order of pico- to nanonewtons and displacements on the order of nano- to micrometers. Several calibration techniques exist, with a variety of strengths and weaknesses. This research presents techniques that enable noncontact calibration of the output sensor voltage-to-displacement sensitivity and the cantilever stiffness through analysis of the unscaled thermal vibration of a cantilever in a liquid environment.

A noncontact stiffness calibration method is presented that identifies cantilever characteristics by fitting a dynamic model of the cantilever's reaction to a thermal bath according to the fluctuation-dissipation theorem. The fitting algorithm incorporates an assumption of heavy fluid loading, which holds in liquid environments.

The use of the Lorentzian line function and a variable-slope noise model as an alternate approach to the thermal noise method was found to reduce the difference between calibrations performed on the same cantilever in air and in water, relative to existing techniques. This alternate approach was used in combination with the new stiffness calibration technique to determine the voltage-to-displacement sensitivity without requiring contact loading of the cantilever.

Additionally, computational techniques are presented for the investigation of alternate cantilever geometries, including V-shaped and warped cantilevers. These techniques offer opportunities for future research to further reduce the uncertainty of atomic force microscopy calibration.
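
For orientation, here is a minimal sketch of the generic thermal noise method that the abstract builds on: fit a Lorentzian line plus a sloped noise floor to the measured displacement power spectrum, then apply the equipartition theorem. The fit function, its parameterization, and the temperature are illustrative assumptions, not the thesis's heavy-fluid-loading model.

    import numpy as np
    from scipy.optimize import curve_fit

    kB = 1.380649e-23  # Boltzmann constant, J/K

    def lorentzian_psd(f, A, f0, df, floor, slope):
        # Lorentzian line plus a variable-slope noise floor (an assumed
        # form; the thesis's fluid-loaded model is more involved).
        return A / ((f - f0) ** 2 + df ** 2) + floor + slope * f

    def thermal_stiffness(freq, psd, T=295.0, p0=None):
        # Equipartition estimate: k = kB*T / <x^2>, with <x^2> taken as
        # the area under the fitted Lorentzian peak, noise floor excluded.
        popt, _ = curve_fit(lorentzian_psd, freq, psd, p0=p0, maxfev=10000)
        A, df = popt[0], abs(popt[2])
        mean_square_disp = np.pi * A / df  # analytic area under the peak
        return kB * T / mean_square_disp   # stiffness in N/m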
155

A High Performance Current-Balancing Instrumentation Amplifier for ECG Monitoring Systems and An Instrumentation Amplifier with CMRR Self-Calibration

Lim, Kian-siong. 19 July 2010.
The thesis comprises two topics: a high-performance current-balancing instrumentation amplifier (IA) for ECG (electrocardiogram) monitoring systems, and an IA with CMRR (common-mode rejection ratio) self-calibration. In the first topic, an IA with high CMRR and low input-referred noise is presented for ECG applications. A high-pass filter (HPF) with a small-Gm OTA using a current division technique is employed to attain a small transconductance, so that only a small capacitor is needed in the HPF and integration on silicon is highly feasible. The proposed design is carried out in TSMC's standard 0.18 µm CMOS technology. According to the simulation results, the CMRR is 127 dB and the voltage gain is 45 dB. The second topic discloses an instrumentation amplifier with CMRR self-calibration capability, also carried out in TSMC's standard 0.18 µm CMOS technology. To achieve a CMRR of more than 80 dB, a calibration resistance string and a detection circuit are utilized. The DC gain of the proposed design is 60 dB and the bandwidth is limited to 10 kHz, which suits biomedical signal acquisition applications.
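
The CMRR figures quoted above relate differential gain and common-mode gain in a simple way; the snippet below shows the arithmetic, with gains back-computed from the abstract's reported numbers purely for illustration.

    import math

    def cmrr_db(a_diff, a_cm):
        # CMRR in dB from the differential and common-mode gains.
        return 20.0 * math.log10(abs(a_diff / a_cm))

    # Illustration with the first design's reported figures: a 45 dB
    # differential gain (about 178 V/V) with a 127 dB CMRR implies a
    # common-mode gain near 10**((45 - 127) / 20), about 8e-5 V/V.
    a_diff = 10 ** (45 / 20)
    a_cm = 10 ** ((45 - 127) / 20)
    print(round(cmrr_db(a_diff, a_cm)))  # -> 127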
156

A study of augmented reality for posting information to building images

Yang, Yi-Jang. 8 September 2010.
Geographical image data efficiently help people with wayfinding in an unfamiliar environment. However, since display modes for geographical image data such as 2D maps and 3D virtual reality no longer meet users' needs, the newer technique of augmented reality (AR) has become a better and more effective solution. Augmented reality is a 3D display technique based on computer vision in which 3D virtual objects are combined with the real 3D environment interactively, dynamically, and in real time; it brings particular advantages to the display of building spatial data. This research aims to expose spatial information by seeing through buildings while standing outside them. The approach is, first, to use a single camera to capture building features in serial images; second, to perform building recognition and tracking between reference images and serial images using the Speeded-Up Robust Features (SURF) algorithm; third, to use the point correspondences between serial images to estimate the camera parameters via computer vision techniques; and finally, to augment the 3D model map of the buildings onto the building images according to those camera parameters.
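
A minimal sketch of the SURF recognition-and-tracking step using OpenCV's contrib implementation (SURF lives in the nonfree xfeatures2d module); the image paths, Hessian threshold, and the use of a RANSAC homography for registration are illustrative assumptions rather than the thesis pipeline.

    import cv2
    import numpy as np

    # Requires opencv-contrib-python built with the nonfree modules;
    # the file names are placeholders.
    ref = cv2.imread("reference_building.jpg", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(ref, None)
    kp2, des2 = surf.detectAndCompute(frame, None)

    # Lowe's ratio test over the two nearest neighbours.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]

    # A RANSAC homography relates the matched points; together with the
    # camera intrinsics it yields the pose used to register the 3D model.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)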
157

The Method of Manufactured Universes for Testing Uncertainty Quantification Methods

Stripling, Hayes Franklin. December 2010.
The Method of Manufactured Universes (MMU) is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which "experimental" data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented here manufactures a particle-transport "universe," models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new "experiments" within the manufactured reality. To further test the responses of these UQ methods, we conduct exercises with "experimental" replicates, "measurement" error, and choices of physical inputs that reduce the accuracy of the diffusion model's approximation of our manufactured laws. Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental statistical formulation was not appropriate for our functional data, but that the code allows a knowledgeable user to vary parameters within this formulation to tailor its behavior to a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to further develop a calibration method and to characterize the diffusion model discrepancy. Overall, we conclude that an MMU exercise with a properly designed universe (that is, one that adequately represents some real-world problem) will give the modeler an added understanding of the interaction between a given UQ method and his or her more complex problem of interest. The modeler can then apply this understanding to make more informed predictive statements.
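
A toy version of the MMU workflow, using a one-dimensional manufactured law, a deliberately imperfect model, and a scikit-learn Gaussian process as the UQ method; every function and number here is an illustrative stand-in for the transport/diffusion setup in the thesis.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    # Manufactured "universe": a known law plus "measurement" error.
    truth = lambda x: np.sin(3 * x) + 0.5 * x
    x_exp = rng.uniform(0.0, 2.0, 20)
    y_exp = truth(x_exp) + rng.normal(0.0, 0.05, x_exp.size)

    # Imperfect model (standing in for the diffusion approximation).
    model = lambda x: 0.6 * x

    # Emulate the model discrepancy with a GP, then assess whether the
    # quantified uncertainty actually covers the manufactured truth.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(x_exp.reshape(-1, 1), y_exp - model(x_exp))
    x_new = np.linspace(0.0, 2.0, 200)
    mean, std = gp.predict(x_new.reshape(-1, 1), return_std=True)
    covered = np.abs(truth(x_new) - (model(x_new) + mean)) < 2.0 * std
    print(f"2-sigma coverage of the truth: {covered.mean():.0%}")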
158

Measurement of Small Scale Roughness of Seabed with Laser Scanning

Cheng, Ming-Hsiang. 12 July 2004.
This work studies the application of laser structured-light scanning to measuring the small-scale roughness of the seabed. We use a CCD camera to capture the displacement of the laser light; the location of the laser light in pixel coordinates can be converted into world coordinates once the CCD camera is calibrated. We propose an algorithm analogous to the longitudes and latitudes of a map projection: a calibration board is placed in alignment with the laser scanning sheet, and grid points at 50 mm spacing are laid out on the board to represent the intersections of the longitudes and latitudes. The position of a point in pixel coordinates can then be obtained by referring to its neighboring graticule. We designed three experiments to verify the accuracy of the system. The first measures the distances between feature points on the calibration board, then checks and corrects the optical distortion of the lenses. The second measures a laser-scanned slice of a known object and checks the accuracy of the scanning system against the object's height and width. In the third, we measure an object with small height variations on its surface to test the resolution of the system. The results indicate that the error is under 1%; only then did we proceed with the design, analysis, and measurement of an artificial seabed. The artificial seabed model is a 210 mm × 210 mm × 30 mm acrylic board with sand ripples formed in a 150 mm × 150 mm square; the amplitude of the ripples is no larger than ±8 mm and no smaller than ±1.5 mm. A contour map of the sand ripples is plotted to analyze the measurements: slice data are extracted from a reconstructed surface of the sand ripples and compared with theoretical values. The results show that the error between the ideal and measured sand-ripple profiles is within ±2 mm. To further test the system's tolerance of turbidity, conditions that alter environmental turbidity were incorporated into the seabed experiments. The results show that the system maintains stable performance in environments below 2.3 NTU (Nephelometric Turbidity Units), with the error between the ideal and measured sand-ripple profiles still within ±2 mm.
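
The graticule lookup described above amounts to a local pixel-to-world mapping within each 50 mm grid cell. The sketch below uses a per-cell homography as a simple stand-in for the thesis's interpolation scheme; the corner coordinates are made-up examples.

    import cv2
    import numpy as np

    def pixel_to_world(p, corners_px, corners_mm):
        # Map one pixel to world coordinates from the four corners of its
        # neighboring graticule cell (50 mm grid on the calibration board).
        H = cv2.getPerspectiveTransform(np.float32(corners_px),
                                        np.float32(corners_mm))
        return cv2.perspectiveTransform(np.float32([[p]]), H)[0, 0]

    # Made-up example: a slightly skewed cell mapping to (0,0)-(50,50) mm.
    corners_px = [(102, 98), (161, 99), (163, 157), (100, 155)]
    corners_mm = [(0, 0), (50, 0), (50, 50), (0, 50)]
    print(pixel_to_world((130, 126), corners_px, corners_mm))  # ~[24, 25] mm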
159

Parameter Calibration for the Tidal Model by the Global Search of the Genetic Algorithm

Chung, Shih-Chiang. 12 September 2006.
This study applies the genetic algorithm (GA) to the calibration of boundary parameters in a hydrodynamics-based tidal model. The objective is to minimize the deviation between the estimates produced by the simulation model and the real tidal data along the Taiwan coast. Manual trial-and-error has been widely used in the past, but that approach is inefficient given the complexity posed by the tremendous number of parameters. With modern computing capability, automatic search procedures, in particular the GA, can handle the large data set and reduce human subjectivity in the calibration. Moreover, owing to its efficient evolutionary procedures, the GA can find better solutions in a shorter time than the manual approach. Based on the preliminary experiments of this study, integrating the GA with the hydrodynamics-based tidal model improves the accuracy of the simulation.
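
A minimal real-coded genetic algorithm of the kind described: tournament selection, uniform crossover, and Gaussian mutation over bounded parameters. The operators and rates are illustrative assumptions; in the study's setting, the objective would be the deviation between simulated and observed tides at the coastal gauges.

    import numpy as np

    def ga_calibrate(objective, lo, hi, pop=40, gens=100, pm=0.1, seed=0):
        # Real-coded GA: tournament selection, uniform crossover,
        # Gaussian mutation, all clipped to the parameter bounds.
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        P = rng.uniform(lo, hi, size=(pop, lo.size))
        f = np.array([objective(x) for x in P])
        for _ in range(gens):
            # Tournament selection: keep the better of two random rows.
            i, j = rng.integers(pop, size=(2, pop))
            parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
            # Uniform crossover between consecutive parents.
            mix = rng.random(P.shape) < 0.5
            children = np.where(mix, parents, np.roll(parents, 1, axis=0))
            # Gaussian mutation on a fraction pm of the genes.
            mutate = rng.random(P.shape) < pm
            children += mutate * rng.normal(0.0, 0.1 * (hi - lo), P.shape)
            P = np.clip(children, lo, hi)
            f = np.array([objective(x) for x in P])
        return P[f.argmin()], f.min()

    # Usage with a stand-in objective (RMSE against observed tides would
    # replace this in the real calibration):
    best, err = ga_calibrate(lambda x: np.sum((x - 0.3) ** 2),
                             lo=np.zeros(4), hi=np.ones(4))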
160

Simultaneous calibration of a microscopic traffic simulation model and OD matrix

Kim, Seung-Jun. 30 October 2006.
With the recent widespread deployment of intelligent transportation systems (ITS) in North America, there is an abundance of data on traffic systems and thus an opportunity to use these data in the calibration of microscopic traffic simulation models. Even though ITS data have been used to some extent in such calibration, efforts have focused on improving calibration quality with aggregate forms of ITS data rather than disaggregate data. In addition, researchers have focused on identifying the parameters associated with car-following and lane-changing behavior models and their impacts on overall calibration performance; estimation of the origin-destination (OD) matrix has therefore been treated as a preliminary step rather than as a stage included in the calibration process. This research develops a methodology to calibrate the OD matrix jointly with the behavioral model parameters using a bi-level calibration framework. The upper level seeks the best model parameters using a genetic algorithm (GA). At this level, a statistically based calibration objective function is introduced to account for the disaggregate form of ITS data and thus accurately replicate the dynamics of observed traffic conditions; specifically, the Kolmogorov-Smirnov test is used to measure the "consistency" between the observed and simulated travel time distributions. The calibration of the OD matrix is performed in the lower level, where observed and simulated travel times are fed into the OD estimator. The interdependence between travel time information and the OD matrix is formulated using an extended Kalman filter (EKF) algorithm, selected to quantify the nonlinear dependence of the simulation results (travel times) on the OD matrix. The two test sites are an urban arterial and a freeway in Houston, Texas, and the VISSIM model was used to evaluate the proposed methodologies. It was found that the accuracy of the calibration can be improved by using disaggregate data and by considering both driver behavior parameters and demand.
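
The Kolmogorov-Smirnov comparison at the heart of the upper-level objective is a one-liner with SciPy; the travel-time samples below are made up for illustration.

    import numpy as np
    from scipy.stats import ks_2samp

    # Hypothetical travel times in seconds; the study used disaggregate
    # ITS observations and VISSIM outputs instead.
    observed = np.array([312.0, 298.5, 305.1, 330.2, 287.9, 301.4, 295.0])
    simulated = np.array([308.2, 295.7, 341.0, 310.5, 290.3, 299.8, 306.6])

    # The KS statistic is the largest vertical gap between the two
    # empirical CDFs; a calibration that matches the observed travel-time
    # distribution drives this statistic toward zero.
    stat, pvalue = ks_2samp(observed, simulated)
    print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.3f}")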
