361

Calibration of Hydrologic Models Using Distributed Surrogate Model Optimization Techniques: A WATCLASS Case Study

Kamali, Mahtab 17 February 2009 (has links)
This thesis presents a new approach to the calibration of hydrologic models using a distributed computing framework. Distributed hydrologic models are known to be computationally intensive and difficult to calibrate. To cope with the high computational cost of the process, a Surrogate Model Optimization (SMO) technique built for distributed computing facilities is proposed. The proposed method, along with two analogous SMO methods, is employed to calibrate the WATCLASS hydrologic model. This model was developed at the University of Waterloo and is now part of Environment Canada's MESH system (Modélisation Environnementale Communautaire (MEC) for Surface Hydrology (SH), Environment Canada's community environmental modeling system). SMO has the advantage of being less sensitive to the "curse of dimensionality" and is very efficient for large-scale, computationally expensive models. In this technique, a mathematical model is constructed from a small set of simulated data from the original expensive model. SMO follows an iterative strategy in which, at each iteration, the surrogate model maps the region of the optimum more precisely. A new comprehensive method based on a smooth regression model is proposed for the calibration of WATCLASS. This method has at least two advantages over previously proposed methods: a) it does not require a large number of training data, and b) it has few model parameters, so its construction and validation are not demanding. To evaluate the performance of the proposed SMO method, it was applied to five well-known test functions and the results were compared to those of two other analogous SMO methods. Since the performance of all three SMOs was promising, two instances of WATCLASS modeling the Smoky River watershed were calibrated using the three adopted SMOs, and the resulting Nash-Sutcliffe numbers are reported.
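The surrogate-model idea described in this abstract can be sketched in one dimension: fit a cheap smooth regression model to a handful of expensive simulations, minimize the surrogate, evaluate the true model there, and repeat. This is a minimal illustration only — `expensive_model` is a stand-in for a costly run (e.g., one WATCLASS simulation), not the thesis's actual method.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a computationally expensive simulation.
    return (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)

def smo_minimize(f, lo, hi, n_init=5, n_iter=6):
    """Iteratively fit a smooth (quadratic) surrogate to all samples seen
    so far and move toward the surrogate's minimizer."""
    x = np.linspace(lo, hi, n_init)
    y = f(x)
    for _ in range(n_iter):
        c = np.polyfit(x, y, 2)  # smooth regression surrogate
        # Minimizer of c0*x^2 + c1*x + c2; fall back to best sample if flat.
        x_new = -c[1] / (2 * c[0]) if c[0] > 0 else x[np.argmin(y)]
        x_new = float(np.clip(x_new, lo, hi))
        x = np.append(x, x_new)
        y = np.append(y, f(x_new))   # one expensive evaluation per iteration
    return x[np.argmin(y)], y.min()

x_best, y_best = smo_minimize(expensive_model, 0.0, 1.0)
```

Each iteration costs a single expensive evaluation, which is the point of the approach for models where one run may take hours.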
362

Uncertainty Analysis and the Identification of the Contaminant Transport and Source Parameters for a Computationally Intensive Groundwater Simulation

Yin, Yong January 2009 (has links)
Transport parameter estimation and contaminant source identification are critical steps in the development of a physically based groundwater contaminant transport model. Due to the irreversibility of the dispersion process, the calibration of a transport model is inherently ill-posed and very sensitive to the simplifications employed in developing lumped models. In this research, a methodology for the calibration of physically based, computationally intensive transport models was developed and applied to a case study, the Reich Farm Superfund site in Toms River, New Jersey. Using HydroGeoSphere, a physically based, transient, three-dimensional, computationally intensive groundwater flow model with spatially and temporally varying recharge was developed. Because implementing the saturation-versus-permeability curve (the van Genuchten equation) causes convergence problems for large-scale models with coarse discretization, a novel flux-based method was developed to determine unsaturated-zone solutions for soil-water-retention models. The parameters for the flow system were determined separately from the parameters for the contaminant transport model. The contaminant transport and source parameters were estimated using both approximately 15 years of TCE concentration data from continuous well records and approximately 30 years of data from traditional monitoring wells, and the estimates were compared using optimization with two heuristic search algorithms (DDS and MicroGA) and a gradient-based multi-start PEST. The calibration results indicate that, overall, multi-start PEST performs best in terms of the final best objective function value for an equal number of function evaluations. Multi-start PEST was also employed to identify contaminant transport and source parameters under different scenarios, including spatially and temporally varying recharge and averaged recharge.
For the detailed transient flow model with spatially and temporally varying recharge, the transverse dispersivity coefficients were estimated to be significantly smaller than those reported in the literature for the more traditional approach, which uses steady-state flow with averaged, less physically based recharge values. Finally, a methodology for comprehensive uncertainty analysis based on Latin Hypercube sampling, which accounts for multiple parameter sets and the associated correlations, was developed and applied to the case study.
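Latin Hypercube sampling, the basis of the uncertainty analysis mentioned above, stratifies each parameter's range into as many equal intervals as there are samples and draws exactly one point per interval, shuffled independently per dimension. A minimal self-contained sketch (parameter names and bounds are hypothetical, chosen only for illustration):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Latin Hypercube sample: each parameter's range is split into
    n_samples equal strata, and each stratum is sampled exactly once."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # One uniform draw inside each stratum, per dimension.
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    # Shuffle the strata independently in each dimension.
    for j in range(d):
        u[:, j] = u[rng.permutation(n_samples), j]
    samples = np.empty_like(u)
    for j, (lo, hi) in enumerate(bounds):
        samples[:, j] = lo + u[:, j] * (hi - lo)
    return samples

# e.g., 10 parameter sets over (transverse dispersivity, source strength):
sets = latin_hypercube(10, [(0.1, 10.0), (50.0, 500.0)], seed=0)
```

Compared with plain Monte Carlo, this guarantees coverage of each parameter's full range even with few expensive model runs.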
363

Calibration of ultrasound scanners for surface impedance measurement

Vollmers, Antony Stanley 04 April 2005 (has links)
The primary objective of this research was to investigate the feasibility of calibrating ultrasound scanners to measure surface impedance from reflection data. The proposed method uses calibration curves obtained from interfaces of known impedance. Such a calibration curve may then be used, with interpolation, to relate measured grey level to impedance for the characterization of tissue specimens with unknown properties. This approach can be applied independently of the particular medical ultrasound scanner to obtain reproducible tissue impedance values without offline data processing or complicated custom electronics.

Two medical ultrasound machines from different manufacturers were used in the experiment: a 30 MHz machine and a 7.5 MHz machine. The calibration curves for each machine were produced by imaging the interfaces of a vegetable oil floating over salt solutions of varying concentration.

To test the method, the acoustical impedances of porcine liver, kidney, and spleen were determined by relating measured grey levels to reflection coefficients using the calibration curves and then inverting the reflection coefficients to obtain impedance values. The 30 MHz machine's calculated tissue impedances for liver, kidney, and spleen were 1.476 ± 0.020, 1.486 ± 0.020, and 1.471 ± 0.020 MRayl, respectively. The 7.5 MHz machine's tissue impedances were 1.467 ± 0.088, 1.507 ± 0.088, and 1.457 ± 0.088 MRayl, respectively. The differences between the two machines are 0.61%, 1.41%, and 0.95% for the impedance of liver, kidney, and spleen tissue, respectively. If grey level alone is used to characterize the tissue, the differences between the two machines are 45.9%, 40.3%, and 39.1% for liver, kidney, and spleen. The results support the hypothesis that tissue impedance can be determined using calibration curves and is consistent across multiple machines.
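The grey-level-to-impedance pipeline described above can be sketched with the standard normal-incidence relation R = (Z2 − Z1)/(Z2 + Z1), inverted as Z2 = Z1·(1 + R)/(1 − R). The calibration-curve numbers below are purely illustrative placeholders, not data from the thesis, and `z_ref` stands in for the coupling-medium impedance:

```python
import numpy as np

# Hypothetical calibration curve: mean grey level measured at interfaces
# of known reflection coefficient (values are illustrative only).
cal_grey = np.array([30.0, 60.0, 95.0, 140.0, 190.0])
cal_refl = np.array([0.005, 0.010, 0.020, 0.035, 0.055])

def impedance_from_grey(grey, z_ref=1.430):
    """Interpolate grey level -> reflection coefficient on the calibration
    curve, then invert R = (Z2 - Z1)/(Z2 + Z1) for the tissue impedance
    Z2 (MRayl), given the reference-medium impedance z_ref (MRayl)."""
    r = np.interp(grey, cal_grey, cal_refl)
    return z_ref * (1.0 + r) / (1.0 - r)

z_tissue = impedance_from_grey(85.0)
```

Because the calibration curve absorbs each scanner's gain and display mapping, impedances recovered this way can be compared across machines, which raw grey levels cannot.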
364

Development of Measurement-based Time-domain Models and its Application to Wafer Level Packaging

Kim, Woopoung 02 July 2004 (has links)
In today's semiconductor-based computer and communication technology, system performance is determined primarily by two factors: on-chip and off-chip operating frequency. In this dissertation, time-domain measurement-based methods that enable gigabit data transmission in both the IC and the package are proposed using Time-Domain Reflectometry (TDR) equipment. To evaluate these methods, a wafer-level package test vehicle was designed, fabricated, and characterized, and the electrical issues associated with gigabit data transmission were investigated using it. The test vehicle consisted of two board transmission lines, one silicon transmission line, and solder bumps of 50 um diameter and 100 um pitch. In this dissertation, 1) the frequency-dependent characteristic impedance and propagation constant of the transmission lines were extracted from TDR measurements; 2) non-physical RLGC models for the transmission lines were developed from the transient behavior to simulate the extracted characteristic impedance and propagation constant; and 3) the solder bumps were modeled analytically. The effects of the assembled wafer-level package, the silicon substrate and board material, and the material interfaces on gigabit data transmission are then discussed using the test vehicle. Finally, design recommendations for a wafer-level package on an integrated board are proposed for gigabit data transmission in both the IC and the package.
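The basic impedance extraction behind TDR measurements follows from the step-reflection coefficient: ρ = V_reflected / V_incident, and the line impedance is Z = Z0·(1 + ρ)/(1 − ρ) for a reference impedance Z0 (typically 50 Ω). A minimal sketch of that single relation (the dissertation's full frequency-dependent extraction is more involved):

```python
def tdr_impedance(v_incident, v_reflected, z0=50.0):
    """Impedance seen by a TDR step, from the reflection coefficient
    rho = v_reflected / v_incident: Z = Z0 * (1 + rho) / (1 - rho)."""
    rho = v_reflected / v_incident
    return z0 * (1.0 + rho) / (1.0 - rho)

# A +20 mV reflection on a 200 mV incident step in a 50-ohm system
# indicates a line slightly above 50 ohms:
z_line = tdr_impedance(0.200, 0.020)
```

A zero reflection returns exactly Z0, and a negative reflection indicates an impedance below the reference, which is how discontinuities such as solder-bump transitions show up in the TDR trace.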
365

Self-sampled All-MOS ASK Demodulator & Synchronous DAC with Self-calibration for Bio-medical Applications

Chen, Chih-Lin 29 June 2010 (has links)
This thesis comprises two topics: a self-sampled all-MOS ASK demodulator and a synchronous DAC with self-calibration. An all-MOS ASK demodulator with a wide bandwidth for lower-ISM-band applications is presented in the first half of the thesis. The chip area is reduced by avoiding passive elements entirely, making the circuit compact enough to be integrated in an SOC (system-on-chip) for wireless biomedical applications, particularly biomedical implants. Because of its low area cost and low power consumption, the proposed design can also easily be integrated in other mobile medical devices. The self-sampled loop with a MOS equivalent-capacitor compensation mechanism enlarges the bandwidth, which is more than sufficient for any application using the lower ISM bands. To demonstrate the technique, an ASK demodulator prototype was implemented and measured in a TSMC 0.35 μm standard CMOS process. The second topic presents a synchronous DAC with self-calibration. The main idea is to use a calibration circuit to overcome the large output-voltage error caused by variation of the unit capacitor. Without calibration, the INL is larger than 1.7 LSB; after calibration, the INL is improved to less than 0.5 LSB. To demonstrate this technique, a DAC prototype was implemented and measured in a TSMC 0.18 μm standard CMOS process.
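The INL figures quoted above measure how far each DAC output level deviates from a straight line, in units of one LSB. A minimal endpoint-fit sketch with illustrative (hypothetical) measured levels for a 3-bit DAC:

```python
def inl_lsb(measured):
    """Integral nonlinearity of each DAC code, in LSB, measured against
    the endpoint-fit straight line from the first to the last level."""
    n = len(measured)
    lsb = (measured[-1] - measured[0]) / (n - 1)   # one LSB step
    ideal = [measured[0] + i * lsb for i in range(n)]
    return [(m, v, (m - v) / lsb) for m, v in zip(measured, ideal)]

# Illustrative 3-bit DAC output levels (volts), with a small code-2 error:
levels = [0.0, 0.13, 0.27, 0.38, 0.5, 0.63, 0.75, 0.875]
errors = [e for _, _, e in inl_lsb(levels)]
```

By construction the endpoint codes have zero INL, so the worst-case value occurs at an interior code; capacitor mismatch in a capacitive DAC shows up exactly as such interior deviations, which the calibration circuit trims.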
366

Design and Assembly of a Rotational Laser Scanning System for Small Scale Seabed Roughness

Li, Jiu-min 29 July 2004 (has links)
This paper reports the design and development of an underwater laser scanning system to measure the geometry of underwater objects. The structured-light scanning method requires a calibrated CCD camera as the input device. Because the underwater environment differs greatly from that in air, conventional in-air calibration methods cannot be applied to the underwater case. In this paper we propose a calibration algorithm for the CCD camera analogous to the idea of longitude/latitude in map projection. The calibration board pattern is fabricated by laying out vertical and horizontal grid dots at a 5 cm span with an NC milling machine. To obtain higher accuracy, we redesigned the laser source holder to make the board and the laser scan line coplanar, and we use a new focus-adjustable laser so that a sharper image of the edge on the target can be captured. The CCD camera is then calibrated with the calibration board. To test the new system, two test pieces were used: one with sinusoidal ripples of amplitudes varying from 8 mm to 3 mm, and one with a rough surface of known spatial power spectrum. Scanning from roughly 1 meter away, the absolute error for the sine-wave ripples is less than 1 mm in the vertical direction, and the power spectrum of the rough surface is accurate down to wavelengths on the order of 3 to 5 mm. To survive the harsh underwater environment, we designed and built a rotational scanning system. It operates as an automatic image-capturing system, using a single-board computer as the controller in conjunction with a PLC (Programmable Logic Controller) for system power management. Using two 12 V batteries as the main power source, sampling once per hour and capturing 360 images per operation, the system can run for approximately 39 hours.
367

The study of applying wavelet transform to fiber optic sensors

Wang, Yi-Ju 07 August 2006 (has links)
The main advantage of the wavelet transform over its Fourier counterpart is its suitability for transient signals. Furthermore, the wavelet packet transform has very good frequency-analysis capability, and as a result it has developed rapidly and is widely researched and used in industry and academia. We study the characteristics of fiber optic sensors by applying the wavelet transform. In this paper, traditional Fourier analysis is taken as the baseline and wavelet packet analysis as the comparison. The major applications are: (1) calibration of hydrophones and (2) vibration measurement. In the calibration of hydrophones, the experimental results show an inaccuracy of 2.72 dB re V/μPa and a standard deviation of 5.3 dB re V/μPa with Fourier analysis, but an inaccuracy of 0.5 dB and a standard deviation of 1.6 dB re V/μPa with wavelet packet analysis. This shows that wavelet packet analysis has better analytic ability than traditional Fourier analysis. In vibration measurement, we use FBG interferometers to measure stable vibration. The experimental results show that wavelet packet analysis has frequency-analysis ability as good as Fourier analysis. In addition, for transient signals induced by falling stones, the results show that wavelet packet analysis has better resolution and identification capability than Fourier analysis.
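The time-localization property that makes wavelets suited to transients can be seen even in a single level of the simplest wavelet, the Haar: the detail coefficients are pairwise differences, so an isolated transient lights up one coefficient at the corresponding time position, whereas a global Fourier magnitude spectrum spreads it across all frequencies. A minimal sketch (a one-level Haar step, not the wavelet packet analysis used in the thesis):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: pairwise scaled averages
    (approximation) and pairwise scaled differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# A short transient on a constant background: the detail coefficients
# localize it in time.
sig = np.ones(16)
sig[10] += 2.0                            # transient "impact" at sample 10
_, detail = haar_step(sig)
peak = int(np.argmax(np.abs(detail)))     # pair index containing the transient
```

Sample 10 falls in pair index 5, so the single nonzero detail coefficient pinpoints when the transient occurred — the property exploited for the falling-stone signals above.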
368

An Active Camera Calibration Method with XYZ 3D Table

Tseng, Ching-I 12 July 2000 (has links)
Machine vision technology is broadly applied in many areas, such as industrial inspection, medical image processing, remote sensing, and nanotechnology. It recovers useful information about a scene from its two-dimensional projections. This recovery requires the inversion of a many-to-one mapping, and important data are often lost because the mapping is not exactly correct, owing to lens distortion, rotation and perspective distortion, and non-ideal vision systems. Camera calibration can compensate for these ill conditions. In this thesis I present an active calibration technique, derived from Song's research (1996), for calibrating the camera's intrinsic parameters. It requires no reference object and directly uses images of the environment; we only have to drive the camera through a series of translational motions with the XYZ 3-D table.
369

Calibration of CCD Camera for Underwater Laser Scanning System

Hwaung, Tien-Chen 04 February 2002 (has links)
To estimate the correct dimensions of an underwater target, we can use a CCD camera, cast a laser light stripe onto the target, and observe the displacement of the laser light to obtain the dimensions. The laser light appears differently at different points because the surface of the target is not smooth. When we acquire the image from the CCD camera, we need to calibrate the displacement of the laser light and convert it back to the actual dimensions of the underwater target. The optical distortion and non-linearity of the CCD camera affect the acquired image, as does the location of the camera; this is why the camera must be calibrated first. Previously, the non-linearity of a CCD camera was calibrated in a purely mathematical way. In this work, we lay vertical and horizontal grid lines at a 50 mm span on an acrylic plate. These grid lines act like the longitudes and latitudes of a map: a target point is located by a pair of interpolated longitude and latitude values, which are then used to estimate the location of the point in the world coordinate system. Several targets of different sizes and shapes were chosen to verify the approach, and we also tested the influence of water clarity. The results indicate that the error is under 3% when the underwater image is captured by a calibrated CCD camera.
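The longitude/latitude interpolation described above amounts to locating an image point between known calibration grid lines and interpolating its world coordinates linearly. A minimal sketch with a hypothetical calibration cell (pixel positions and the 50 mm spacing are illustrative):

```python
def interp_axis(p, p0, p1, w0, w1):
    """Linear 'longitude/latitude' interpolation along one axis: pixel p
    between grid lines imaged at pixels p0 and p1 maps to world
    coordinates between w0 and w1 (mm)."""
    t = (p - p0) / (p1 - p0)
    return w0 + t * (w1 - w0)

def pixel_to_world(px, py, cell):
    """Map an image point to world mm using the calibration grid cell
    containing it. `cell` holds two opposite grid dots of the cell as
    (pixel_x, pixel_y, world_x_mm, world_y_mm)."""
    (x0, y0, wx0, wy0), (x1, y1, wx1, wy1) = cell
    wx = interp_axis(px, x0, x1, wx0, wx1)
    wy = interp_axis(py, y0, y1, wy0, wy1)
    return wx, wy

# Hypothetical cell: grid dots 50 mm apart imaged 80 px apart.
cell = ((120.0, 200.0, 0.0, 0.0), (200.0, 280.0, 50.0, 50.0))
wx, wy = pixel_to_world(160.0, 240.0, cell)
```

Because the mapping is anchored to grid dots imaged through the water, refraction and camera non-linearity are absorbed locally into each cell rather than modeled explicitly.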
370

Vision based navigation system for autonomous proximity operations: an experimental and analytical study

Du, Ju-Young 17 February 2005 (has links)
This dissertation presents an experimental and analytical study of the Vision Based Navigation system (VisNav), a novel intelligent optical sensor system recently invented at Texas A&M University for autonomous proximity operations. The dissertation focuses on system calibration techniques and navigation algorithms and is composed of four parts. First, the fundamental hardware and software design of the VisNav system is introduced. Second, system calibration techniques that enable accurate VisNav applications are discussed, together with a characterization of errors. Third, a new six-degree-of-freedom navigation algorithm based on the Gaussian Least Squares Differential Correction is presented, which provides geometrically optimal position and attitude estimates through batch iterations. Finally, a dynamic state estimation algorithm based on the Extended Kalman Filter (EKF) is developed that recursively estimates position, attitude, linear velocities, and angular rates. Moreover, an approach for integrating VisNav measurements with those of an Inertial Measurement Unit (IMU) is derived. This novel VisNav/IMU integration technique is shown to significantly improve navigation accuracy and to guarantee the robustness of the navigation system in the event of occasional VisNav data dropout.
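The Gaussian Least Squares Differential Correction is batch Gauss-Newton: predict the measurements from the current state estimate, form residuals, and solve the linearized normal equations for a state correction until convergence. A deliberately simplified sketch — 2D position from range measurements to known beacons, not the full six-degree-of-freedom VisNav formulation:

```python
import numpy as np

def glsdc_position(beacons, ranges, x0, iters=10):
    """GLSDC / Gauss-Newton sketch: refine position x so the predicted
    beacon ranges match the measurements in a least-squares sense."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - beacons                     # (n, 2) offsets to beacons
        pred = np.linalg.norm(diff, axis=1)    # predicted ranges
        resid = ranges - pred                  # measurement residuals
        H = diff / pred[:, None]               # Jacobian d(range)/d(x)
        # Differential correction: solve H dx = resid in least squares.
        dx = np.linalg.lstsq(H, resid, rcond=None)[0]
        x = x + dx
    return x

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
truth = np.array([3.0, 4.0])
ranges = np.linalg.norm(truth - beacons, axis=1)  # noise-free measurements
est = glsdc_position(beacons, ranges, x0=[1.0, 1.0])
```

With noise-free measurements and well-separated beacons the iteration converges quadratically to the true position; the same structure extends to line-of-sight measurements and a six-dimensional position/attitude state.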
