181
Supply Current Modeling and Analysis of Deep Sub-Micron CMOS Circuits. Ahmad, Tariq B, 01 January 2008 (has links) (PDF)
Continued technology scaling has introduced many new challenges in VLSI design. Instantaneous switching of gates draws large currents through the supply lines, causing significant voltage drops. These high instantaneous currents and voltage drops degrade both reliability and performance: large current magnitudes can cause electromigration, while voltage drop slows circuit operation. Designing the power supply lines therefore requires computing the maximum current flowing through them. At the same time, developing digital integrated circuits within short design cycles demands accurate and fast timing and power simulation. Unfortunately, simulators that rely on device-level modeling, such as HSPICE, are prohibitively slow for large designs, so methods that produce good maximum-current estimates quickly are critical. In this work, a compact model has been developed for maximum current estimation that speeds up the computation by orders of magnitude over commercial tools.
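The abstract does not detail the compact model itself, but the quantity it targets, the worst-case instantaneous supply current, can be illustrated with a toy superposition of per-gate switching-current pulses. This sketch is not the thesis's method; all switching times, peak currents, and pulse widths are made-up example values.

```python
import numpy as np

# Toy illustration (not the thesis's compact model): approximate each gate's
# switching current as a triangular pulse and superpose the pulses to find the
# peak instantaneous current drawn from the supply line.  All event values are
# hypothetical.
t = np.linspace(0.0, 2e-9, 2001)                      # 2 ns window, 1 ps steps

def tri_pulse(t, t0, i_peak, width):
    """Triangular current pulse centered at t0 with the given peak and base width."""
    return np.maximum(0.0, i_peak * (1.0 - np.abs(t - t0) / (width / 2.0)))

# Hypothetical switching events: (center time [s], peak current [A], base width [s]).
gates = [(0.20e-9, 0.8e-3, 0.10e-9),
         (0.25e-9, 1.2e-3, 0.15e-9),
         (0.90e-9, 0.6e-3, 0.10e-9)]

i_supply = sum(tri_pulse(t, t0, ip, w) for t0, ip, w in gates)
k = int(i_supply.argmax())
print(f"peak supply current ~ {i_supply[k] * 1e3:.2f} mA at t = {t[k] * 1e9:.2f} ns")
```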
182
Logspline Density Estimation with an Application to the Study of Survival Data of Lung Cancer Patients. Chen, Yong, 18 August 2004 (has links) (PDF)
A logspline method for estimating an unknown density function f from sample data is studied. Our approach uses maximum likelihood estimation to fit the unknown density from a space of linear splines with a finite number of fixed, uniform knots. At the end of this thesis, the method is applied to a real survival data set of lung cancer patients.
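The fitting step described here, maximum likelihood over log-densities that are linear splines with fixed uniform knots, can be sketched directly. The support bounds, knot count, and simulated survival-time data below are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of logspline density estimation with linear splines and fixed
# uniform knots.  Support [a, b], knot count, and the simulated "survival times"
# are illustrative choices.
rng = np.random.default_rng(0)
x = 10.0 * rng.weibull(1.5, size=200)          # stand-in for survival times
a, b = 0.0, 1.1 * x.max()
knots = np.linspace(a, b, 8)                   # fixed uniform knots
grid = np.linspace(a, b, 2000)                 # grid for the normalizing constant
dx = grid[1] - grid[0]

def neg_loglik(theta):
    # s(.) is the piecewise-linear log-density defined by its values at the knots.
    s_x = np.interp(x, knots, theta)
    s_grid = np.interp(grid, knots, theta)
    log_Z = np.log(np.sum(np.exp(s_grid)) * dx)   # normalizer so exp(s)/Z integrates to 1
    return -(np.sum(s_x) - x.size * log_Z)

res = minimize(neg_loglik, x0=np.zeros(knots.size), method="BFGS")
density = np.exp(np.interp(grid, knots, res.x))
density /= np.sum(density) * dx
print("estimated density at the median survival time:",
      round(float(np.interp(np.median(x), grid, density)), 4))
```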
183
Food Shelf Life: Estimation and Experimental Design. Larsen, Ross Allen Andrew, 15 May 2006 (has links) (PDF)
Shelf life is a parameter of the lifetime distribution of a food product, usually the time until a specified proportion (1-50%) of the product has spoiled according to taste. The data used to estimate shelf life typically come from a planned experiment in which sampled food items are observed at specified times, with observation times usually selected adaptively using 'staggered sampling.' Ad hoc methods based on linear regression have been recommended for estimating shelf life, but methods based on maximizing a likelihood (MLE) have also been proposed, studied, and used. Both approaches assume the Weibull distribution. The observed lifetimes in shelf life studies are censored, a fact that the ad hoc methods largely ignore. One purpose of this project is to compare the statistical properties of the ad hoc estimators and the maximum likelihood estimator. The simulation study showed that the MLE methods have higher coverage than the regression methods, better asymptotic properties with regard to bias, and lower median squared error (mese) values, especially when shelf life is defined by smaller percentiles. They should therefore be used in practice. A genetic algorithm (Hamada et al. 2001) was used to find near-optimal sampling designs and was successfully programmed for general shelf life estimation. The genetic algorithm generally produced designs with much smaller median squared errors than the staggered designs commonly used in practice, and these designs were radically different from the standard designs. Thus, the genetic algorithm may be used to plan future studies with good estimation properties.
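The likelihood-based alternative the abstract favors can be sketched as a censored Weibull fit. The simulated spoilage times, the day-45 censoring cutoff, and the 25% spoilage definition of shelf life below are illustrative assumptions rather than the designs studied in the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Hedged sketch: maximum likelihood fit of a Weibull lifetime model with
# right-censored observations.  All data and settings are hypothetical.
rng = np.random.default_rng(1)
true_shape, true_scale = 2.0, 30.0                   # days until spoilage (made up)
t_true = true_scale * rng.weibull(true_shape, size=60)
t_obs = np.minimum(t_true, 45.0)                     # study ends at day 45 -> censoring
event = t_true <= 45.0                               # True = spoilage observed

def neg_loglik(params):
    shape, scale = np.exp(params)                    # optimize on the log scale
    ll = np.where(event,
                  weibull_min.logpdf(t_obs, shape, scale=scale),  # observed lifetimes
                  weibull_min.logsf(t_obs, shape, scale=scale))   # censored: only t > t_obs known
    return -ll.sum()

res = minimize(neg_loglik, x0=np.log([1.0, np.median(t_obs)]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
shelf_life_25 = weibull_min.ppf(0.25, shape_hat, scale=scale_hat)  # time until 25% spoiled
print(f"shape={shape_hat:.2f}, scale={scale_hat:.1f}, "
      f"25th-percentile shelf life={shelf_life_25:.1f} days")
```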
184
Parameter Estimation for the Beta Distribution. Owen, Claire Elayne Bangerter, 20 November 2008 (has links) (PDF)
The beta distribution is useful in modeling continuous random variables that lie between 0 and 1, such as proportions and percentages. The beta distribution takes on many different shapes and may be described by two shape parameters, alpha and beta, that can be difficult to estimate. Maximum likelihood and method of moments estimation are possible, though method of moments is much more straightforward. We examine both of these methods here, and compare them to three more proposed methods of parameter estimation: 1) a method used in the Program Evaluation and Review Technique (PERT), 2) a modification of the two-sided power distribution (TSP), and 3) a quantile estimator based on the first and third quartiles of the beta distribution. We find the quantile estimator performs as well as maximum likelihood and method of moments estimators for most beta distributions. The PERT and TSP estimators do well for a smaller subset of beta distributions, though they never outperform the maximum likelihood, method of moments, or quantile estimators. We apply these estimation techniques to two data sets to see how well they approximate real data from Major League Baseball (batting averages) and the U.S. Department of Energy (radiation exposure). We find the maximum likelihood, method of moments, and quantile estimators perform well with batting averages (sample size 160), and the method of moments and quantile estimators perform well with radiation exposure proportions (sample size 20). Maximum likelihood estimators would likely do fine with such a small sample size were it not for the iterative method needed to solve for alpha and beta, which is quite sensitive to starting values. The PERT and TSP estimators do more poorly in both situations. We conclude that in addition to maximum likelihood and method of moments estimation, our method of quantile estimation is efficient and accurate in estimating parameters of the beta distribution.
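As a quick sketch of two of the estimators compared, method of moments and maximum likelihood, the snippet below fits both to simulated proportions. The data are stand-ins, not the Major League Baseball batting averages or Department of Energy data used in the thesis.

```python
import numpy as np
from scipy.stats import beta

# Sketch: method-of-moments and maximum likelihood fits of a beta distribution
# to simulated batting-average-like proportions (illustrative data only).
rng = np.random.default_rng(2)
x = rng.beta(40.0, 110.0, size=160)            # proportions in (0, 1)

# Method of moments: match the sample mean and variance.
m, v = x.mean(), x.var(ddof=1)
common = m * (1.0 - m) / v - 1.0
alpha_mom, beta_mom = m * common, (1.0 - m) * common

# Maximum likelihood: iterative fit, seeded with the moment estimates
# (the abstract notes MLE is sensitive to starting values).
alpha_mle, beta_mle, _, _ = beta.fit(x, alpha_mom, beta_mom, floc=0.0, fscale=1.0)

print(f"MoM: alpha={alpha_mom:.1f}, beta={beta_mom:.1f}")
print(f"MLE: alpha={alpha_mle:.1f}, beta={beta_mle:.1f}")
```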
185
Parameter Estimation for the Lognormal Distribution. Ginos, Brenda Faith, 13 November 2009 (has links) (PDF)
The lognormal distribution is useful in modeling continuous random variables that are greater than or equal to zero. Example scenarios in which the lognormal distribution is used include, among many others: in medicine, latent periods of infectious diseases; in environmental science, the distribution of particles, chemicals, and organisms in the environment; in linguistics, the number of letters per word and the number of words per sentence; and in economics, age at marriage, farm size, and income. The lognormal distribution is also useful in modeling data that would be considered normally distributed except for being more or less skewed (Limpert, Stahel, and Abbt 2001). Appropriately estimating the parameters of the lognormal distribution is vital for the study of these and other subjects. Depending on the values of its parameters, the lognormal distribution takes on various shapes, including a bell curve similar to the normal distribution. This paper contains a simulation study of the effectiveness of various estimators for the parameters of the lognormal distribution. A comparison is made among Maximum Likelihood estimators, Method of Moments estimators, estimators by Serfling (2002), and estimators by Finney (1941). A simulation is conducted to determine which parameter estimators work best across various parameter combinations and sample sizes of the lognormal distribution. We find that the Maximum Likelihood and Finney estimators perform the best overall, with a preference for Maximum Likelihood over the Finney estimators because of its much greater simplicity. The Method of Moments estimators seem to perform best when σ is less than or equal to one, and the Serfling estimators are quite accurate in estimating μ but not σ in all regions studied. Finally, these parameter estimators are applied to a data set counting the number of words in each sentence for various documents, followed by a review of each estimator's performance. Again, we find that the Maximum Likelihood estimators perform best for the given application, but that Serfling's estimators are preferred when outliers are present.
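Two of the compared estimators have simple closed forms and can be sketched directly: maximum likelihood uses the mean and standard deviation of log(x), and method of moments inverts the lognormal mean and variance formulas. The simulated sentence-length data below are stand-ins for the documents analyzed in the thesis.

```python
import numpy as np

# Sketch: maximum likelihood and method-of-moments estimates of the lognormal
# parameters (mu, sigma) from simulated word-count-like data (illustrative only).
rng = np.random.default_rng(3)
x = rng.lognormal(mean=2.5, sigma=0.6, size=500)   # e.g., words per sentence

# Maximum likelihood: mu and sigma are the mean and SD of log(x).
logx = np.log(x)
mu_mle, sigma_mle = logx.mean(), logx.std(ddof=0)

# Method of moments: invert E[X] = exp(mu + sigma^2/2) and
# Var[X] = (exp(sigma^2) - 1) * exp(2*mu + sigma^2).
m, v = x.mean(), x.var(ddof=0)
sigma2_mom = np.log(1.0 + v / m**2)
mu_mom = np.log(m) - sigma2_mom / 2.0

print(f"MLE: mu={mu_mle:.3f}, sigma={sigma_mle:.3f}")
print(f"MoM: mu={mu_mom:.3f}, sigma={np.sqrt(sigma2_mom):.3f}")
```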
186
Application of Convex Methods to Identification of Fuzzy Subpopulations. Eliason, Ryan Lee, 10 September 2010 (has links) (PDF)
In large observational studies, data are often highly multivariate, with many discrete and continuous variables measured on each observational unit. One often derives subpopulations to facilitate analysis. Traditional approaches suggest modeling such subpopulations with a compilation of interaction effects. However, when many interaction effects define each subpopulation, it becomes easier to model membership in the subpopulation than to model the numerous interactions. In many cases, subjects are not complete members of a single subpopulation but rather partial members of multiple subpopulations. Grade of Membership scores preserve the integrity of this partial membership. By generalizing an analytical chemistry concept related to chromatography-mass spectrometry, we obtain a method that can identify latent subpopulations and corresponding Grade of Membership scores for each observational unit.
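The abstract does not spell out the estimation algorithm, so the following is only a generic illustration of what partial membership means computationally: given assumed "pure type" profiles, a unit's Grade of Membership scores are nonnegative weights summing to one that best reconstruct its observed variables. The profiles and data are made up, and the simplex-constrained least-squares step stands in for, rather than reproduces, the chromatography-inspired method developed in the thesis.

```python
import numpy as np
from scipy.optimize import nnls

# Generic illustration of Grade of Membership (partial membership) scores:
# nonnegative weights summing to one over assumed pure-type profiles.
# Profiles and the observed unit are hypothetical; this is not the thesis's method.
pure_types = np.array([[0.9, 0.1, 0.2],        # profile of subpopulation A
                       [0.2, 0.8, 0.7]])       # profile of subpopulation B
unit = np.array([0.55, 0.45, 0.45])            # one observational unit

# Enforce the sum-to-one constraint softly via a heavily weighted extra row.
A = np.vstack([pure_types.T, 1e3 * np.ones(pure_types.shape[0])])
b = np.concatenate([unit, [1e3]])
scores, _ = nnls(A, b)
print("GoM scores (membership in A, B):", np.round(scores / scores.sum(), 3))
```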
187
The Construct Validity of a Situational Judgment Test in a Maximum Performance Context. Stagl, Kevin, 01 January 2006 (has links)
A Predictor Response Process model (see Ployhart, 2006) and research findings were leveraged to formulate research questions about, and generate construct validity evidence for, a new situational judgment test (SJT) designed to measure declarative and strategic knowledge. The first question asked whether SJT response instructions (i.e., 'Should Do' vs. 'Would Do') moderated the validity of an SJT in a maximum performance context. The second question asked what the upper-bound criterion-related validity coefficient is for SJTs in talent selection contexts in which typical performance is the criterion of interest. The third question asked whether the SJT used in the present study was fair for gender and ethnic subgroups according to Cleary's (1968) definition of test fairness. Participants were randomly assigned to complete an SJT with either 'Should Do' or 'Would Do' response instructions, and their maximum decision-making performance outcomes were captured during a moderate-fidelity poker simulation. The findings suggested that knowledge, as measured by the SJT, interacted with response instructions when predicting aggregate and average performance outcomes, such that the 'Should Do' SJT had stronger criterion-related validity coefficients than the 'Would Do' version. The findings also suggested that the uncorrected upper-bound criterion-related validity coefficient for SJTs in selection contexts is at least moderate to strong (β = .478). Moreover, the SJT was fair according to Cleary's definition of test fairness. The implications of these findings are discussed.
188
Diagnostics after a Signal from Control Charts in a Normal Process. Lou, Jianying, 03 October 2008 (has links)
Control charts are fundamental SPC tools for process monitoring. When a control chart or a combination of charts signals, knowing the change point, which distributional parameter changed, and/or the size of the change helps to identify the cause, remove it from the process, or correctly and promptly adjust the process back into control. In this study, we proposed using maximum likelihood (ML) estimation of the current process parameters, together with their ML confidence intervals after a signal, to identify and estimate the changed parameters. The performance of this ML diagnostic procedure is evaluated for several different charts and chart combinations across the sample sizes considered, and compared to the traditional approaches to diagnostics. Neither the ML nor the traditional estimators perform well for all patterns of shifts, but the ML estimator has the best overall performance. The ML confidence interval diagnostics are better overall at determining which parameter has shifted than the traditional diagnostics based on which chart signals. The performance of the generalized likelihood ratio (GLR) chart in shift detection and in ML diagnostics is comparable to the best EWMA chart combination. Because the ML diagnostics follow naturally from a GLR chart, in contrast to the traditional control charts, the study of GLR charts for process monitoring can be further deepened in future work. / Ph. D.
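One building block of such ML diagnostics, the change-point estimate for a step shift in the mean of a normal process with known in-control parameters, can be sketched as follows. The simulated data, shift size, and signal time are illustrative; the thesis's procedure additionally handles variance shifts and confidence-interval-based identification of which parameter changed.

```python
import numpy as np

# Sketch: after a chart signals at sample T, estimate the change point and new
# mean for a step shift in a normal process with known in-control mean mu0 and
# sigma (a standard MLE result).  The data below are simulated.
rng = np.random.default_rng(4)
mu0, sigma, tau_true = 0.0, 1.0, 30
x = np.concatenate([rng.normal(mu0, sigma, tau_true),
                    rng.normal(mu0 + 1.5 * sigma, sigma, 15)])  # shift after sample 30
T = len(x)

# For each candidate change point t, the profile likelihood uses the average of
# observations t+1..T as the post-change mean; the MLE of the change point
# maximizes (T - t) * (xbar_after - mu0)^2.
stats = np.array([(T - t) * (x[t:].mean() - mu0) ** 2 for t in range(T - 1)])
tau_hat = int(np.argmax(stats))
mu1_hat = x[tau_hat:].mean()
print(f"estimated change point: after sample {tau_hat}, new mean ~ {mu1_hat:.2f}")
```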
189
Influence of Column-Base Fixity on Lateral Drift of Gable Frames. Verma, Amber, 29 May 2012 (has links)
In a typical light metal building, the structural members are designed for the forces and moments obtained from the wind drift analysis, which assumes pinned connections at the base. Pinned connections provide no moment at the base and have zero rotational stiffness. In reality, however, every connection provides some restraint and has some rotational stiffness. By modeling the base as pinned, the actual behavior of the connection is not captured, which results in overestimation of lateral drifts and larger moments at the knees of the gable frames. Since the structural components are designed on the basis of these highly conservative results, the cost of the project increases. This thesis investigates the real behavior of the column base connection and seeks to reduce this conservatism by developing a computer program, or "wizard," to calculate the initial rotational stiffness of any column base connection.
To observe the actual behavior of a column base connection under different load cases, a number of finite element models were created in SAP2000. Each finite element model of the column base connection contained a base plate, a column stub, anchor bolts, and in some cases grout. The model was subjected to three main load cases: gravity, wind, and gravity plus wind. From many analyses, the influence of each component's flexibility on the flexibility of the connection was observed, and a list of parameters was created. These parameters are the properties of the above-mentioned components that characterize any column base connection, and they serve as inputs for modeling any configuration of the column base connection in the developed wizard. The wizard uses OpenSees and SAP2000 to analyze the modeled configuration of the connection and provides values of the initial rotational stiffness and maximum bearing pressure for the given loads. These values can then be used in any structural analysis performed to calculate the lateral drift of a frame under lateral loads, yielding results that are less conservative than those obtained by assuming a pinned condition at the base. / Master of Science
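The effect being studied can be illustrated with a much simpler idealization than the thesis's finite element and frame models: a single cantilever column whose base is an elastic rotational spring of stiffness k_theta. As k_theta grows from nearly zero (pinned) toward infinity (fixed), the lateral drift drops toward the fixed-base value, which is the conservatism the wizard is meant to reduce. All section properties, loads, and stiffness values are made-up example numbers.

```python
# Simplified cantilever-column illustration of how base rotational stiffness
# affects lateral drift (not the thesis's gable-frame models; values are made up).
E = 200e9            # steel modulus, Pa
I = 8.0e-5           # second moment of area, m^4
L = 6.0              # column height, m
H = 20e3             # lateral load at the top, N

def drift(k_theta):
    # flexural deflection + rigid-body sway from the base rotation theta = H*L/k_theta
    return H * L**3 / (3 * E * I) + H * L**2 / k_theta

for k in [1e6, 1e7, 1e8, 1e12]:              # N*m/rad, from nearly pinned to nearly fixed
    print(f"k_theta = {k:.0e} N*m/rad -> drift = {drift(k) * 1000:.1f} mm")
print(f"fully fixed base            -> drift = {H * L**3 / (3 * E * I) * 1000:.1f} mm")
```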
190
Maximum Power Point Tracking Using Kalman Filter for Photovoltaic System. Kang, Byung O., 20 January 2011 (has links)
This thesis proposes a new maximum power point tracking (MPPT) method for photovoltaic (PV) systems using a Kalman filter. The Perturbation & Observation (P&O) method is widely used due to its simplicity and ease of implementation, but it usually requires a dithering scheme to reduce noise effects, and dithering slows the tracking response. Tracking speed is the most important factor for improving efficiency under frequently changing environmental conditions.
The proposed method is based on the Kalman filter. An adaptive MPPT algorithm that uses the instantaneous power slope has been introduced previously, but process and sensor noise disturb its estimates. Applying the Kalman filter to this adaptive algorithm reduces tracking failures caused by noise while retaining the algorithm's fast tracking, so the Kalman filter approach generates more power under rapidly changing weather than the P&O method.
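A minimal scalar sketch of the idea as described in the abstract (not the thesis's implementation): the state is the reference voltage, the prediction step climbs the instantaneous power slope, and the Kalman measurement update filters a noisy voltage reading. The toy P-V curve, step size M, and noise covariances are hypothetical.

```python
import numpy as np

# Scalar Kalman-filter MPPT sketch: predict along dP/dV, then update with a
# noisy voltage measurement.  All model values are hypothetical.
rng = np.random.default_rng(5)

def pv_power(v):
    """Toy single-peak P-V curve whose maximum power point sits at 30 V."""
    return 100.0 * v * np.exp(-((v - 30.0) / 12.0) ** 2) / 30.0

M, Q, R = 0.5, 0.05, 1.0        # slope step size; process / measurement noise variances
v_hat, P = 20.0, 1.0            # initial voltage estimate and its variance
v_prev, p_prev = v_hat, pv_power(v_hat)

for _ in range(40):
    v_meas = v_hat + rng.normal(0.0, 0.5)              # noisy operating-voltage reading
    p_meas = pv_power(v_meas) + rng.normal(0.0, 0.5)   # noisy power reading
    dv = v_meas - v_prev
    slope = (p_meas - p_prev) / dv if abs(dv) > 0.2 else 0.0
    v_pred = v_hat + M * slope                         # predict: climb the power slope
    P_pred = P + Q
    K = P_pred / (P_pred + R)                          # Kalman gain
    v_hat = v_pred + K * (v_meas - v_pred)             # update with the voltage measurement
    P = (1.0 - K) * P_pred
    v_prev, p_prev = v_meas, p_meas

print(f"tracked operating voltage ~ {v_hat:.1f} V (toy MPP at 30 V)")
```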
For the simulations, a PV system with a 30 kW array is introduced, along with MPPT controller designs using the Kalman filter and the P&O method. Simulation results compare the proposed method and the P&O method in terms of transient response to a sudden system restart and to irradiation changes at different noise levels. Simulations are also performed using real irradiance data for two entire days, one with smooth irradiance changes and the other with severe irradiance changes. The proposed method showed better performance than the P&O method when the irradiance fluctuated severely, while the two methods performed similarly under smooth irradiance changes. / Master of Science