271 |
Evaluating and developing parameter optimization and uncertainty analysis methods for a computationally intensive distributed hydrological modelZhang, Xuesong 15 May 2009 (has links)
This study focuses on developing and evaluating efficient and effective parameter
calibration and uncertainty analysis methods for hydrologic modeling. Five single-objective
optimization algorithms and six multi-objective optimization algorithms were tested for
automatic parameter calibration of the SWAT model. A new multi-objective
optimization method (Multi-objective Particle Swarm Optimization and Genetic
Algorithms) that combines the strengths of different optimization algorithms was
proposed. Based on the evaluation of the performances of the different algorithms on three
test cases, the new method consistently performed better than or comparably to the other
algorithms.
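The particle swarm component of such a hybrid can be sketched as follows. This is a minimal single-objective PSO, not the dissertation's multi-objective hybrid, and the inertia and acceleration weights are illustrative assumptions:

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal single-objective particle swarm optimization sketch."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # assumed inertia/acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

A genetic-algorithm stage would typically add crossover and mutation over the swarm's best positions; the hybrid's exact combination rule is described in the dissertation itself.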
To reduce the effort of running the computationally intensive SWAT model, a
support vector machine (SVM) was used as a surrogate to approximate the behavior of
SWAT. It was shown that combining the SVM with Particle Swarm Optimization
can reduce the effort of parameter calibration of SWAT. Further, the SVM was used as a
surrogate to implement parameter uncertainty analysis for SWAT. The results show that
the SVM helped save more than 50% of the runs of the computationally intensive SWAT model.
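The surrogate workflow can be sketched as below. The `expensive_model` here is a cheap stand-in quadratic, not SWAT, and the design sizes and SVR settings are illustrative assumptions (scikit-learn's `SVR` stands in for the support vector regressor):

```python
import numpy as np
from sklearn.svm import SVR

# Stand-in for the expensive simulator; in the dissertation this
# would be a full SWAT run, not a closed-form function.
def expensive_model(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2

rng = np.random.default_rng(42)

# 1. Evaluate the expensive model on a small design of points.
X_train = rng.uniform(-1, 1, size=(30, 2))
y_train = np.array([expensive_model(x) for x in X_train])

# 2. Fit an SVM regressor as a cheap surrogate of the simulator.
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X_train, y_train)

# 3. Screen many candidates with the surrogate; run the expensive
#    model only on the most promising few.
candidates = rng.uniform(-1, 1, size=(1000, 2))
ranked = candidates[np.argsort(surrogate.predict(candidates))]
best = min(ranked[:10], key=expensive_model)   # 10 real runs instead of 1000
```

The saving comes from step 3: most candidate evaluations hit the surrogate, and only a short list reaches the real model.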
The effect of model structure on the uncertainty estimation of streamflow simulation
was examined through applying SWAT and Neural Network models. The 95%
uncertainty intervals estimated by SWAT include only 20% of the observed data, while those of the Neural Networks include more than 70%. This indicates that model structure is an
important source of uncertainty in hydrologic modeling and needs to be evaluated
carefully. Further exploration of the effect of different treatments of the uncertainties of
model structures on hydrologic modeling was conducted through applying four types of
Bayesian Neural Networks. By considering uncertainty associated with model structure,
the Bayesian Neural Networks can provide more reasonable quantification of the
uncertainty of streamflow simulation. This study stresses the need for improving
understanding and quantifying methods of different uncertainty sources for effective
estimation of uncertainty of hydrologic simulation.
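The coverage comparison above reduces to counting how many observations fall inside their estimated bounds; a minimal sketch:

```python
import numpy as np

def interval_coverage(lower, upper, observed):
    """Fraction of observations falling inside their uncertainty bounds.
    The dissertation reports ~20% coverage for SWAT's 95% intervals
    versus >70% for the neural-network models."""
    lower, upper, observed = map(np.asarray, (lower, upper, observed))
    return float(np.mean((observed >= lower) & (observed <= upper)))
```

A well-calibrated 95% interval should cover roughly 95% of held-out observations; large shortfalls, as found for SWAT here, point to unaccounted structural uncertainty.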
|
272 |
Measure-Driven Algorithm Design and Analysis: A New Approach for Solving NP-hard ProblemsLiu, Yang 2009 August 1900 (has links)
NP-hard problems have numerous applications in various fields such as networks,
computer systems, circuit design, etc. However, no efficient algorithms have
been found for NP-hard problems, and it is commonly believed that none exist,
i.e., that P ≠ NP. Recently, it has been observed
that there are parameters much smaller than input sizes in many instances of NP-hard
problems in the real world. In the last twenty years, researchers have been interested
in developing efficient algorithms, i.e., fixed-parameter tractable algorithms, for those
instances with small parameters. Fixed-parameter tractable algorithms can practically
find exact solutions to problem instances with small parameters, though those
problems are considered intractable in traditional computational theory.
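As a concrete illustration of fixed-parameter tractability (a standard textbook example, not one of the algorithms developed in this dissertation), vertex cover can be decided in O(2^k · m) time by branching on the endpoints of an uncovered edge:

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by its edge list has a vertex
    cover of size <= k. Bounded search tree: any cover must contain
    u or v for each edge (u, v), so branch on both choices; the
    recursion depth is at most k, giving O(2^k * m) time."""
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but budget is exhausted
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]   # take u into the cover
    rest_v = [e for e in edges if v not in e]   # take v into the cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

The running time is exponential only in the parameter k, not in the graph size, which is exactly why such algorithms are practical when k is small.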
In this dissertation, we propose a new approach of algorithm design and analysis:
discovering better measures for problems. In particular, we use two measures instead of
the traditional single measure, input size, to design algorithms and analyze their time
complexity. For several classical NP-hard problems, we present improved algorithms
designed and analyzed with this new approach.
First we show that the new approach is extremely powerful for designing
fixed-parameter tractable algorithms by presenting improved fixed-parameter tractable
algorithms for the 3D-matching and 3D-packing problems, the multiway cut problem, the feedback vertex set problems on both directed and undirected
graphs, and the max-leaf problems on both directed and undirected graphs. Most of
our algorithms are practical for problem instances with small parameters.
Moreover, we show that this new approach is also good for designing exact algorithms
(with no parameters) for NP-hard problems by presenting an improved exact
algorithm for the well-known satisfiability problem.
Our results demonstrate the power of this new approach to algorithm design and
analysis for NP-hard problems. In the end, we discuss possible future directions on
this new approach and other approaches to algorithm design and analysis.
|
273 |
Parameter Estimation of Dynamic Air-conditioning Component Models Using Limited Sensor DataHariharan, Natarajkumar 2010 May 1900 (has links)
This thesis presents an approach for identifying critical model parameters
in dynamic air-conditioning systems using limited sensor information. The expansion
valve model and the compressor model parameters play a crucial role in the system
model's accuracy. In the past, these parameters have been estimated using a mass flow
meter; however, this is an expensive device and, at times, impractical. In response to
these constraints, a novel method to estimate the unknown parameters of the expansion
valve model and the compressor model is developed. A gray box model obtained by
augmenting the expansion valve model, the evaporator model, and the compressor model
is used. Two numerical search algorithms, nonlinear least squares and Simplex search,
are used to estimate the parameters of the expansion valve model and the compressor
model. This parameter estimation is done by minimizing the error between the model
output and the experimental system's output. Results demonstrate that the nonlinear least
squares algorithm was more robust for this estimation problem than the Simplex search
algorithm.
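The two estimation strategies can be sketched on a toy component model. The exponential model, data, and starting point below are illustrative assumptions, not the thesis's valve or compressor models:

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Hypothetical gray-box component model: y = a * (1 - exp(-b * t)).
def model(theta, t):
    a, b = theta
    return a * (1.0 - np.exp(-b * t))

t = np.linspace(0.1, 5.0, 50)
theta_true = np.array([2.0, 1.3])
rng = np.random.default_rng(1)
y_meas = model(theta_true, t) + 0.01 * rng.standard_normal(t.size)

residual = lambda th: model(th, t) - y_meas
theta0 = np.array([1.0, 0.5])

# Nonlinear least squares: exploits the residual-vector structure,
# which is the robustness advantage reported in the thesis.
fit_nls = least_squares(residual, theta0)

# Simplex (Nelder-Mead): works only on the scalar sum-of-squares cost.
fit_simplex = minimize(lambda th: np.sum(residual(th) ** 2), theta0,
                       method="Nelder-Mead")
```

Both minimize the same error between model output and measured output; the least-squares solver additionally uses the Jacobian of the residuals, which typically makes it more robust on problems of this shape.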
In this thesis, two types of expansion valves, the Electronic Expansion Valve and
the Thermostatic Expansion Valve, are considered. The Electronic Expansion Valve
model is a static model because its dynamics are much faster than the system's
dynamics; the Thermostatic Expansion Valve model, however, is a dynamic one. The
parameter estimation algorithm developed is validated on two different experimental
systems to confirm the practicality of its approach. Knowing the model parameters
accurately can lead to a better model for control and fault detection applications. In
addition to parameter estimation, this thesis also provides and validates a simple usable
mathematical model for the Thermostatic expansion valve.
|
274 |
Assessing Invariance of Factor Structures and Polytomous Item Response Model Parameter EstimatesReyes, Jennifer McGee 2010 December 1900 (has links)
The purpose of the present study was to examine the
invariance of the factor structure and item response model
parameter estimates obtained from a set of 27 items
selected from the 2002 and 2003 forms of Your First College
Year (YFCY). The first major research question of the
present study was: How similar/invariant are the factor
structures obtained from two datasets (i.e., identical
items, different people)? The first research question was
addressed in two parts: (1) Exploring factor structures
using the YFCY02 dataset; and (2) Assessing factorial
invariance using the YFCY02 and YFCY03 datasets.
After using exploratory and confirmatory factor
analysis for ordered data, a four-factor model using 20
items was selected based on acceptable model fit for the YFCY02 and YFCY03 datasets. The four factors (constructs)
obtained from the final model were: Overall Satisfaction,
Social Agency, Social Self Concept, and Academic Skills.
To assess factorial invariance, partial and full factorial
invariance were examined. The four-factor model fit both
datasets equally well, meeting the criteria for partial and
full measurement invariance.
The second major research question of the present
study was: How similar/invariant are person and item
parameter estimates obtained from two different datasets
(i.e., identical items, different people) for the
homogeneous graded response model (Samejima, 1969) and the
partial credit model (Masters, 1982)?
To evaluate measurement invariance using IRT methods,
the item discrimination and item difficulty parameters
obtained from the GRM need to be equivalent across
datasets. The YFCY02 and YFCY03 GRM item discrimination
parameters (slope) correlation was 0.828. The YFCY02 and
YFCY03 GRM item difficulty parameters (location)
correlation was 0.716. The correlations and scatter plots
indicated that the item discrimination parameter estimates
were more invariant than the item difficulty parameter
estimates across the YFCY02 and YFCY03 datasets.
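For reference, the graded response model underlying these parameter estimates can be sketched as follows. The logistic form with a single slope `a` and ordered thresholds `b` is the standard homogeneous GRM; the example values in the usage note are illustrative, not YFCY estimates:

```python
import math

def grm_category_probs(theta, a, b):
    """Category response probabilities for Samejima's graded response
    model. theta: latent trait; a: item discrimination (slope);
    b: ordered item difficulty (location) thresholds.
    P*(k) = logistic(a * (theta - b[k])) is the probability of
    responding in category k or above; category probabilities are
    differences of successive cumulative probabilities."""
    def pstar(bk):
        return 1.0 / (1.0 + math.exp(-a * (theta - bk)))
    cum = [1.0] + [pstar(bk) for bk in b] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]
```

For example, `grm_category_probs(0.0, 1.2, [-1.0, 0.0, 1.0])` returns four probabilities that sum to one; invariance across datasets means the fitted `a` and `b` values, and hence these curves, stay essentially unchanged.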
|
275 |
Improved Rate Control for Low-Delay Communications in H.264/AVC Video Coding StandardWu, Sheng-Wang 17 August 2004 (has links)
In real-time, two-way video communication, minimizing the end-to-end delay of transmitting video data is very important. Since the delay produced by bits accumulating in the encoder buffer must be very small, we need an improved rate control that encodes the video with high quality while maintaining low buffer fullness. One approach to reducing buffer fullness is to skip the encoding of frames, but frame-skipping produces undesirable motion discontinuity in the encoded video sequence. In this thesis, we study the impact of the low-delay constraint in H.264 rate control and its improvements. The drawback of the H.264 rate control is that it cannot handle the frame-skipping mechanism well. To address this, we control the quantization parameter of each I-frame to avoid buffer overflow and frame-skipping. Since encoding the I-frame with a different quantization parameter generates a different rate and distortion for a group of pictures (GOP), we use Lagrangian optimization to find the tradeoff between rate and distortion for a GOP. Using estimation models of rate and distortion for a GOP, we calculate the Lagrangian cost for each possible I-frame quantization parameter; the quantization parameter with minimum Lagrangian cost is chosen for the I-frame. Simulation results show that our proposed rate control encodes the video sequence with fewer skipped frames and higher PSNR compared to the H.264 rate control under the low-delay constraint.
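The Lagrangian selection step described above can be sketched as below; the rate and distortion models passed in are placeholders for the thesis's GOP-level estimation models:

```python
def choose_iframe_qp(rate_of, dist_of, qp_range, lam):
    """Pick the I-frame quantization parameter minimizing the
    Lagrangian cost J(QP) = D(QP) + lambda * R(QP) over a GOP.
    rate_of and dist_of stand in for the thesis's GOP-level rate
    and distortion estimation models, supplied as plain callables."""
    return min(qp_range, key=lambda qp: dist_of(qp) + lam * rate_of(qp))
```

Larger lambda weights rate more heavily, pushing the choice toward a coarser quantizer and lower buffer fullness; smaller lambda favors distortion, i.e. quality.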
|
276 |
Applications of GENESIS on Modeling Structure-Induced Shoreline ChangesHuang, Ya-Ling 27 June 2005 (has links)
Coastal erosion is, more than ever, a global problem. Adopting a highly efficient, cost-effective, and reliable numerical model would help predict and manage erosion, as well as alleviate many coastal problems. This thesis reports the results of a thorough investigation of the popular one-dimensional long-term shoreline change model, GENESIS, analyzing its suitability, sensitivity, and the technical difficulties likely to be encountered while using the model, with the aim of predicting the effect of coastal structures on shoreline changes.
Prior to performing a modeling task, this report provides constructive recommendations on setting the length of shoreline to be covered in the modeling, the boundary conditions, the grid spacing, the transport parameters K1 and K2, and the revision of wave angle, followed by verification using the results of several physical scale models, in order to enhance the reliability of the modeling and the parameters employed. Finally, reasonable ranges of K values are proposed. For modeling shoreline changes induced by a detached breakwater with normally incident waves, an empirical equation is proposed to determine the K ratio (K2/K1), which offers a useful guide for achieving results within tolerance limits of 12% to -7%. When considering oblique waves incident on a single detached breakwater, K1=0.6 is used and the ratio K2/K1 ≈ 0.25~0.5. For modeling the effect of a single groin, the present study suggests K1=0.6 and K2/K1 ≈ 1~2. On the basis of these principles for setting the K values, the results are then applied to model the shoreline changes due to the installation of detached breakwaters and groins.
From the results of this study, for waves normally incident on a single detached breakwater, a small ratio of the offshore distance to the length of the breakwater (S/B) or a larger wave height increases the salient dimension, and wave period has almost no effect on the results produced; for a small S/B ratio, the maximum downcoast retreat increases, and its magnitude is almost unaffected by the wave conditions imposed. For oblique waves incident on a single detached breakwater, a larger wave angle, a small S/B, or a larger wave height increases the salient dimension, and wave period has almost no effect on the results produced; for a larger wave angle or small S/B ratio, the maximum downcoast retreat increases, and its magnitude is almost unaffected by wave height and wave period. For the effect of a single groin, a larger wave angle or groin length increases the maximum downcoast retreat, and its magnitude is almost unaffected by wave height and wave period.
|
277 |
A Study of Process Parameter Optimization for BIC SteelTsai, Jeh-Hsin 06 February 2006 (has links)
The Taguchi method is also called quality engineering. It is a systematic methodology for product design (modification) and process design (improvement) with the greatest saving of cost and time, in order to satisfy customer requirements. Taguchi's parameter design is also known as robust design, which has the merits of low cost and high efficiency, and can achieve the activities of product quality design, management, and improvement, consequently reinforcing the competitive ability of a business. How to effectively apply parameter design, to shorten time spent on research, to bring products of low cost and high quality to market early, and to reinforce competitive advantage, is a worthy research topic.
However, parameter design optimization problems are difficult in practical application because (1) complex and nonlinear relationships exist among the system's inputs, outputs, and parameters; (2) interactions may occur among parameters; (3) in Taguchi's two-phase optimization procedure, the adjustment factor cannot be guaranteed to exist in practice; and (4) for various reasons, data may be lost or never available, and Taguchi's method cannot treat such incomplete data well.
Neural networks have learning capacity, fault tolerance, and model-free characteristics. These characteristics make neural networks a competitive tool for processing multivariable input-output implementations; successful fields include diagnostics, robotics, scheduling, decision-making, prediction, etc. In the process of searching for the optimum, genetic algorithms can avoid local optima, enhancing the possibility of finding the global optimum.
This study drew the key parameters from spheroidizing theory, and L18 and L9 orthogonal experimental arrays were applied to determine the optimal operating parameters by signal-to-noise (S/N) analysis. The conclusions are summarized as follows:
1. The spheroidizing of AISI 3130 used to have the highest rate of unqualified product, requiring a second annealing treatment. The operational record before improvement showed that 83 tons of the 3130 steel required the second treatment. The optimal operating parameters were determined by an L18(6^1×3^5) orthogonal experimental array. The control parameter of the annealing temperature was at B2
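The S/N analysis used above can be sketched with the standard Taguchi formulas. Which criterion applies to each response in this study is not restated here, so the two common forms are shown:

```python
import math

def sn_larger_the_better(ys):
    """Taguchi S/N ratio (dB) for a larger-the-better response:
    -10 * log10(mean of 1/y^2)."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    """Taguchi S/N ratio (dB) for a smaller-the-better response:
    -10 * log10(mean of y^2)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))
```

For each row of the orthogonal array, the replicated responses are condensed into one S/N value, and the level of each factor maximizing the mean S/N is selected as optimal.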
|
278 |
A Feasible Evaluation and Analysis of Visual Air Quality Index in Urban AreasChang, Kuo-chung 21 July 2006 (has links)
This research analyzed the weather information (temperature, wind velocity, visibility, and total cloudiness) from the Taipei and Kaohsiung Weather Stations of the Central Weather Bureau, and air pollution data from the Air Quality Monitoring Stations of the Environmental Protection Administration, Executive Yuan (Shihlin, Jhongshan, Wanhua, Guting, Songshan, Nanzih, Zuoying, Cianjin, and Siaogan), to evaluate the feasibility of using visibility as an ambient air quality index by statistical analysis.
In regard to the visibility in the Taipei metropolis, the visibility between 1983 and 1992 was steady at 5~11 kilometers. After 1993 the visibility increased gradually to 6~16 kilometers, indicating that the visual air quality in the Taipei metropolis has improved year by year. In regard to the visibility in the Kaohsiung metropolis, the index decreased year by year from 10~24 kilometers to 2~12 kilometers, and the decrease was particularly obvious after 1993.
When the air quality index in the metropolis is greater than 100, the visibility is categorized as "poor", meaning the visibility is within 3 kilometers. When the air quality index ranges between 76~100, the visibility is categorized as "median", meaning the visibility is within 4 kilometers. When the air quality index ranges between 50~75, the visibility is categorized as "good", meaning the visibility is within 7 kilometers. When the air quality index ranges between 20~49, the visibility is categorized as "excellent", meaning the visibility is beyond 7 kilometers.
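The categorization above can be sketched as a simple threshold function; the level names and kilometer thresholds are taken directly from the study's reported ranges:

```python
def visual_air_quality_level(visibility_km):
    """Visibility-based air quality level using the thresholds reported
    in this study: <= 3 km 'poor' (AQI > 100), <= 4 km 'median'
    (AQI 76~100), <= 7 km 'good' (AQI 50~75), > 7 km 'excellent'
    (AQI 20~49)."""
    if visibility_km <= 3:
        return "poor"
    if visibility_km <= 4:
        return "median"
    if visibility_km <= 7:
        return "good"
    return "excellent"
```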
|
279 |
Research on electrical performance of differential pair design in package substrateHuang, Chih-yi 18 July 2007 (has links)
Differential signaling is suitable for high-speed signal transmission due to its lower noise induction and higher common-mode noise rejection compared to its single-ended counterpart. However, a high-performance differential transmission-line pair requires excellent symmetry and an appropriate design for the substrate layer stack-up. In a practical IC package substrate especially, asymmetry in the differential transmission-line pair is inevitable because of the locations of IC pads and solder balls. Furthermore, different differential transmission-line pair architectures are also demanded in consideration of limited substrate floorplan space and substrate layer stack-up structures. In this thesis, several differential pairs have been implemented on a conventional 4-layer laminate package substrate. Their high-frequency performance is measured using a vector network analyzer and then compared by converting the results into mixed-mode S-parameters.
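The single-ended-to-mixed-mode conversion is a standard similarity transform; a sketch assuming single-ended ports (1,2) and (3,4) form the two differential pairs (the actual port mapping of the measured substrates may differ):

```python
import numpy as np

def single_ended_to_mixed_mode(S):
    """Convert a 4-port single-ended S-matrix to mixed-mode form,
    assuming ports (1,2) form differential pair A and ports (3,4)
    pair B. Result rows/columns are ordered (dA, dB, cA, cB), so the
    top-left 2x2 block is the differential-mode response and the
    off-diagonal blocks are mode conversion. Smm = M @ S @ inv(M)."""
    M = (1.0 / np.sqrt(2.0)) * np.array([[1, -1, 0, 0],
                                         [0, 0, 1, -1],
                                         [1, 1, 0, 0],
                                         [0, 0, 1, 1]], dtype=complex)
    return M @ S @ np.linalg.inv(M)
```

For a perfectly symmetric pair the mode-conversion blocks are zero; the asymmetry the thesis discusses shows up directly as nonzero differential-to-common terms.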
|
280 |
Saturated Reluctance Identification of high voltage Induction Motor and Estimation of Induction Motor/Generator EffectLee, Ching-Lin 10 June 2003 (has links)
Saturated reluctance identification of an induction motor can be implemented by additional sensors, the finite-element method, spectrum analysis, or a step voltage test, as reported in the research literature. But these are not easy to implement in field evaluation, where the power system model must be built up, because of absent factory parameters, high cost, extra sensor installation, or variable voltage and frequency.
Concerning practicality, it is always inconvenient for the end user to build up a simulation, and the linear model of the motor cannot provide accurate simulation answers when the model runs into saturation during power system transients. Accordingly, this thesis discusses the two following topics:
First, this thesis introduces a simple and practical method, based on the manufacturer's instruction manual, to estimate the saturated reluctance of high/medium voltage induction motors for modeling. The motor's dynamic characteristics can then be analyzed using the induction motor d-q-0 model directly, in place of traditional mathematical power equations.
Moreover, we can evaluate the motor's generator action, driven by rotor inertia after loss of voltage, and identify the discrepancy between the cases in which line capacitors exist and in which they do not. In addition, we can explain the residual voltage after a power system breakdown by comparing the simulation results with recorder charts.
|