About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
591

Robust command generation for nonlinear systems

Kozak, Kristopher C. 05 1900 (has links)
No description available.
592

Goodness-of-Fit Test Issues in Generalized Linear Mixed Models

Chen, Nai-Wei 2011 December 1900 (has links)
Linear mixed models and generalized linear mixed models are random-effects models widely applied to analyze clustered or hierarchical data. Random effects are typically assumed to be normally distributed in the context of mixed models. However, in the mixed-effects logistic model, violating this normality assumption may yield inconsistent estimates of some fixed effects and of the variance component of the random effects when the variance of the random-effects distribution is large. On the other hand, summary statistics used for assessing goodness of fit in ordinary logistic regression models may not be directly applicable to mixed-effects logistic models. In this dissertation, we present two independent studies related to goodness-of-fit tests in generalized linear mixed models. First, we consider a semi-nonparametric density representation for the random-effects distribution and provide a formal statistical test of normality of the random-effects distribution in mixed-effects logistic models. We obtain parameter estimates by a non-likelihood-based estimation procedure. We evaluate the type I error rate of the proposed test statistic through asymptotic results, and also carry out a bootstrap hypothesis-testing procedure to control inflation of the type I error rate and to study the power of the proposed test statistic. The methodology is illustrated by revisiting a case study in mental health. Second, to improve assessment of model fit in mixed-effects logistic models, we apply nonparametric local polynomial smoothed residuals over within-cluster continuous covariates to the unweighted sum-of-squares statistic for assessing the goodness of fit of logistic multilevel models. We perform a simulation study to evaluate the type I error rate and the power for detecting a missing quadratic or interaction term among the fixed effects using the kernel-smoothed unweighted sum-of-squares statistic based on local polynomial smoothed residuals over x-space. We also illustrate the application with a real data set from a clinical trial.
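As a rough sketch of the second idea, the fragment below computes a kernel-smoothed unweighted sum-of-squares statistic from fitted probabilities and a within-cluster continuous covariate. The Gaussian Nadaraya-Watson smoother, the bandwidth, and all names here are illustrative stand-ins for the local polynomial smoothing described above; in practice the statistic's null distribution would be calibrated by a bootstrap.

```python
import numpy as np

def smoothed_uss(y, p_hat, x, bandwidth=0.5):
    """Kernel-smoothed unweighted sum-of-squares statistic (illustrative sketch)."""
    r = y - p_hat                          # raw residuals y_i - p_hat_i
    d = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2)                # Gaussian kernel weights
    r_smooth = (w @ r) / w.sum(axis=1)     # smoothed residual at each x_i
    return np.sum(r_smooth**2)             # calibrate its null distribution by bootstrap

rng = np.random.default_rng(0)
x = rng.uniform(size=100)
p = 1.0 / (1.0 + np.exp(-x))               # toy fitted probabilities
y = rng.binomial(1, p).astype(float)
print(smoothed_uss(y, p, x))
```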
593

ROBUST STATISTICAL METHODS FOR NON-NORMAL QUALITY ASSURANCE DATA ANALYSIS IN TRANSPORTATION PROJECTS

Uddin, Mohammad Moin 01 January 2011 (has links)
The American Association of State Highway and Transportation Officials (AASHTO) and the Federal Highway Administration (FHWA) require the use of statistically based quality assurance (QA) specifications for construction materials. As a result, many state highway agencies (SHAs) have implemented QA specifications for highway construction. In these statistically based QA specifications, the quality characteristics of most construction materials are assumed to be normally distributed; however, the normality assumption can be violated in several ways: the data distribution can be skewed, exhibit excess kurtosis, or be bimodal. If the process shows evidence of a significant departure from normality, the quality measures calculated may be erroneous. In this research study, an extended QA data analysis model is proposed that significantly improves the type I error and power of the F-test and t-test, and removes bias from Percent Within Limits (PWL) based pay factor calculations. For the F-test, three alternative tests are proposed when the sampling distribution is non-normal: 1) Levene's test; 2) Brown and Forsythe's test; and 3) O'Brien's test. One alternative is proposed for the t-test: the nonparametric Wilcoxon-Mann-Whitney rank-sum test. For PWL-based pay factor calculation when lot data suffer from non-normality, three schemes were investigated: 1) simple transformation methods; 2) the Clements method; and 3) a modified Box-Cox transformation using the golden section search method. A Monte Carlo simulation study revealed that both Levene's test and Brown and Forsythe's test are robust alternative tests of variances when the underlying sample population is non-normal. Between the t-test and the Wilcoxon test, the t-test was found to be significantly robust even when the sample population distribution was severely non-normal. Among the data transformations for PWL-based pay factors, the modified Box-Cox transformation using the golden section search method was found to be the most effective in minimizing or removing pay bias. Field QA data were analyzed to validate the model, and Microsoft® Excel macro-based software was developed that can adjust pay consequences due to non-normality.
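The alternative tests named above are available in standard libraries, which makes the Monte Carlo comparison easy to reproduce in outline. The sketch below (sample size, skewed lognormal population, and simulation count are assumptions, not the study's settings) estimates empirical type I error rates for the t-test, the Wilcoxon-Mann-Whitney test, Levene's test, and the Brown-Forsythe variant under non-normal data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, n, alpha = 2000, 20, 0.05
rejections = {"t-test": 0, "Mann-Whitney": 0, "Levene": 0, "Brown-Forsythe": 0}

for _ in range(n_sim):
    # Two samples from the same skewed (lognormal) population:
    # every rejection is a type I error.
    a, b = rng.lognormal(size=n), rng.lognormal(size=n)
    rejections["t-test"] += stats.ttest_ind(a, b).pvalue < alpha
    rejections["Mann-Whitney"] += stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
    rejections["Levene"] += stats.levene(a, b, center="mean").pvalue < alpha
    rejections["Brown-Forsythe"] += stats.levene(a, b, center="median").pvalue < alpha

for name, count in rejections.items():
    print(f"{name}: empirical type I error = {count / n_sim:.3f}")
```

Rates close to the nominal 0.05 indicate robustness to the skewed population; `center="median"` in `scipy.stats.levene` is exactly the Brown-Forsythe modification.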
594

ROBUST GENERIC MODEL CONTROL FOR PARAMETER INTERVAL SYSTEMS

Istre, Joseph Michael 01 January 2004 (has links)
A multivariable control technique is proposed for a class of nonlinear systems with parameter intervals. The control is based on the feedback linearization scheme called Generic Model Control, and alters the control calculation by utilizing parameter intervals, employing an adaptive step, averaging control predictions, and applying an interval problem solution. The proposed approach is applied to controlling both a linear and a nonlinear arc welding system, as well as in other simulations of scalar and multivariable systems.
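The core of Generic Model Control is to specify a desired rate of change for the output, r = K1·e + K2·∫e dt, and then invert the process model f(x, u) = r for the control input. A minimal sketch on an assumed scalar nonlinear plant follows; the gains, the model, and the perfect-model inversion are illustrative, and the interval and adaptive extensions proposed in the thesis are not shown:

```python
# Assumed plant for illustration: dx/dt = -a*x**3 + b*u
a, b = 1.0, 2.0
K1, K2 = 2.0, 1.0                    # GMC tuning gains
dt, x, integ, x_sp = 0.01, 0.0, 0.0, 1.0

for step in range(2000):
    e = x_sp - x
    integ += e * dt
    r = K1 * e + K2 * integ          # desired dx/dt (GMC reference rate)
    u = (r + a * x**3) / b           # invert the model: f(x, u) = r
    x += (-a * x**3 + b * u) * dt    # Euler integration of the plant
print(f"x after 20 s: {x:.4f} (setpoint {x_sp})")
```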
595

Empirical Likelihood Confidence Intervals for the Population Mean Based on Incomplete Data

Valdovinos Alvarez, Jose Manuel 09 May 2015 (has links)
The use of doubly robust estimators is key to estimating the population mean response in the presence of incomplete data. Cao et al. (2009) proposed an alternative doubly robust estimator that exhibits strong performance compared to existing estimation methods. In this thesis, we apply the jackknife empirical likelihood, the jackknife empirical likelihood with nuisance parameters, the profile empirical likelihood, and an empirical likelihood method based on the influence function to make inferences about the population mean. We use these methods to construct confidence intervals for the population mean, and compare the coverage probabilities and interval lengths using both the "usual" doubly robust estimator and the alternative estimator proposed by Cao et al. (2009). An extensive simulation study is carried out to compare the different methods. Finally, the proposed methods are applied to two real data sets.
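For orientation, the "usual" doubly robust estimator referred to above is the augmented inverse-probability-weighted (AIPW) mean, which is consistent if either the missingness (propensity) model or the outcome regression is correctly specified. A minimal sketch assuming simple parametric working models; the Cao et al. (2009) variant and the empirical likelihood intervals themselves are not implemented here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_mean(y, x, observed):
    """AIPW (doubly robust) estimate of the population mean (illustrative).

    y        : outcomes (np.nan where missing)
    x        : fully observed covariate matrix
    observed : boolean mask, True where y is observed
    """
    pi = LogisticRegression().fit(x, observed).predict_proba(x)[:, 1]  # propensity
    m = LinearRegression().fit(x[observed], y[observed]).predict(x)    # outcome model
    r = observed.astype(float)
    y_filled = np.where(observed, y, 0.0)
    # Consistent if either the propensity or the outcome model is right
    return np.mean(r * y_filled / pi - (r - pi) / pi * m)

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
y_full = x @ np.array([1.0, -1.0]) + rng.normal(size=500)
obs = rng.random(500) < 1.0 / (1.0 + np.exp(-x[:, 0]))   # missingness depends on x
y = np.where(obs, y_full, np.nan)
print(dr_mean(y, x, obs))     # close to the true mean of ~0
```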
596

Multi-Timescale Control of Energy Storage Enabling the Integration of Variable Generation

Zhu, Dinghuan 01 May 2014 (has links)
A two-level optimal coordination control approach for energy storage and conventional generation, consisting of advanced frequency control and stochastic optimal dispatch, is proposed to deal with the real-power balancing control problem introduced by variable renewable energy sources (RESs) in power systems. In the proposed approach, the power and energy constraints on energy storage are taken into account in addition to traditional power system operational constraints such as generator output limits and power network constraints. The advanced frequency control level, based on robust control theory and decentralized static output feedback design, is responsible for system frequency stabilization and restoration, whereas the stochastic optimal dispatch level, based on the concept of stochastic model predictive control (SMPC), determines the optimal dispatch of generation resources and energy storage under the uncertainties introduced by RESs as well as demand. In the advanced frequency control level, low-order decentralized robust frequency controllers for energy storage and conventional generation are simultaneously designed based on a state-space structure-preserving model of the power system, and the optimal controller gains are solved via an improved linear matrix inequality algorithm. In the stochastic optimal dispatch level, various optimization decomposition techniques, including both primal and dual decompositions together with two different decomposition schemes (i.e., scenario-based and temporal-based decomposition), are extensively investigated in terms of convergence speed, owing to the resulting large-scale and computationally demanding SMPC optimization problem. A two-stage mixed decomposition method is conceived to achieve the maximum speedup of the SMPC optimization solution process. The control design philosophy underlying the entire work is the so-called time-scale matching principle: the conventional generators are mainly responsible for balancing the low-frequency components of the power variations, whereas the energy storage devices, because of their fast response capability, are employed to alleviate the relatively high-frequency components. The performance of the proposed approach is tested and evaluated by numerical simulations on both the WECC 9-bus system and the IEEE New England 39-bus system.
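The time-scale matching principle can be illustrated with a simple first-order low-pass split of a power imbalance signal: the slow component is assigned to conventional generation and the fast residual to storage. All numbers below (filter time constant, synthetic imbalance signal) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau = 1.0, 60.0                                   # 1 s steps, 60 s time constant
imbalance = rng.normal(0, 10, 600).cumsum() * 0.05    # synthetic wind deviation (MW)

alpha = dt / (tau + dt)                               # first-order low-pass coefficient
gen = np.zeros_like(imbalance)
for k in range(1, len(imbalance)):
    gen[k] = gen[k-1] + alpha * (imbalance[k] - gen[k-1])   # slow part -> generators
storage = imbalance - gen                             # fast residual -> energy storage
print(f"std to generators: {gen.std():.2f} MW, to storage: {storage.std():.2f} MW")
```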
597

Regret-based Reward Elicitation for Markov Decision Processes

Regan, Kevin 22 August 2014 (has links)
Markov decision processes (MDPs) have proven to be a useful model for sequential decision-theoretic reasoning under uncertainty, yet they require the specification of a reward function that can require sophisticated human judgement to assess relevant tradeoffs. This dissertation casts the problem of specifying rewards as one of preference elicitation and aims to minimize the degree of precision with which a reward function must be specified while still allowing optimal or near-optimal policies to be produced. We demonstrate how robust policies can be computed for MDPs given only partial reward information using the minimax regret criterion. Minimax regret offers an intuitive bound on loss; however, it is computationally intractable in general. This work develops techniques for exploiting MDP structure to allow for offline precomputation that enables efficient online minimax regret computation. To complement this exact approach, we develop several general approximations that offer both upper and lower bounds on minimax regret. We further show how approximations can be improved online during the elicitation procedure to balance accuracy and efficiency. To effectively reduce regret, we investigate a spectrum of elicitation approaches that range from the computationally demanding optimal selection of complex queries about full MDP policies (which are informative but, we believe, cognitively difficult) to the heuristic selection of simple queries that focus on a small set of reward parameters. Results are demonstrated on MDPs drawn from the domains of assistive technology and autonomic computing. Finally, we demonstrate our framework on a realistic website optimization domain, performing elicitation on websites with tens of thousands of webpages. We show that minimax regret can be efficiently computed, and we develop informative and cognitively reasonable queries that quickly lower minimax regret, producing policies that offer significant improvements in the design of the underlying websites.
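On a small scale, minimax regret under partial reward information can be computed by brute force. The sketch below assumes a toy two-state MDP and a finite set of feasible reward functions; the dissertation works with general linear constraints on rewards and exploits MDP structure rather than enumeration:

```python
import numpy as np
from itertools import product

# Toy MDP (all numbers assumed): 2 states, 2 actions, discounted.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.2, 0.8]],    # P[s][a] = next-state distribution
              [[0.9, 0.1], [0.1, 0.9]]])
# Partial reward knowledge represented as a finite set of feasible rewards r[s]
reward_set = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]

def value(policy, r):
    """Mean state value of a deterministic policy under reward r (policy evaluation)."""
    P_pi = np.array([P[s][policy[s]] for s in range(2)])
    return np.linalg.solve(np.eye(2) - gamma * P_pi, r).mean()

policies = list(product(range(2), repeat=2))          # all deterministic policies
best = [max(value(p, r) for p in policies) for r in reward_set]
# Max regret of a policy: worst-case loss versus the optimum over feasible rewards
max_regret = {p: max(best[i] - value(p, r) for i, r in enumerate(reward_set))
              for p in policies}
mmr_policy = min(max_regret, key=max_regret.get)
print(mmr_policy, max_regret[mmr_policy])
```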
598

Mechanism Design For Covering Problems

Minooei, Hadi January 2014 (has links)
Algorithmic mechanism design deals with efficiently computable algorithmic constructions in the presence of strategic players who hold the inputs to the problem and may misreport their input if doing so benefits them. Algorithmic mechanism design finds applications in a variety of internet settings such as resource allocation, facility location, and e-commerce applications such as sponsored search auctions. There is an extensive body of work in algorithmic mechanism design on packing problems such as single-item auctions, multi-unit auctions, and combinatorial auctions. But, surprisingly, covering problems, also called procurement auctions, have remained almost completely unexplored, especially in the multidimensional setting. In this thesis, we systematically investigate multidimensional covering mechanism-design problems, wherein there are m items that need to be covered and n players who provide covering objects, with each player i having a private cost for the covering objects he provides. A feasible solution to the covering problem is a collection of covering objects (obtained from the various players) that together cover all items. Two widely considered objectives in mechanism design are: (i) cost minimization (CM), which aims to minimize the total cost incurred by the players and the mechanism designer; and (ii) payment minimization (PayM), which aims to minimize the payment to players. Covering mechanism-design problems turn out to behave quite differently from packing mechanism-design problems. In particular, various techniques utilized successfully for packing problems do not perform well for covering mechanism-design problems, and this necessitates new approaches and solution concepts. In this thesis we devise various techniques for handling covering mechanism-design problems, which yield a variety of results for both the CM and PayM objectives. In our investigation of the CM objective, we focus on two representative covering problems: uncapacitated facility location (UFL) and vertex cover (VCP). For multidimensional UFL, we give a black-box method to transform any Lagrangian-multiplier-preserving ρ-approximation algorithm for UFL into a truthful-in-expectation ρ-approximation mechanism. This yields the first result for multidimensional UFL, namely a truthful-in-expectation 2-approximation mechanism. For multidimensional VCP (Multi-VCP), we develop a decomposition method that reduces the mechanism-design problem to the simpler task of constructing threshold mechanisms, a restricted class of truthful mechanisms, for simpler (in terms of graph structure or problem dimension) instances of Multi-VCP. By suitably designing the decomposition and the threshold mechanisms it uses as building blocks, we obtain truthful mechanisms with approximation ratios (n is the number of nodes): (1) O(r² log n) for r-dimensional VCP; and (2) O(r log n) for r-dimensional VCP on any proper minor-closed family of graphs (which improves to O(log n) if no two neighbors of a node belong to the same player). These are the first truthful mechanisms for Multi-VCP with non-trivial approximation guarantees. For the PayM objective, we work in the oft-used Bayesian setting, where players' types are drawn from an underlying distribution and may be correlated, and the goal is to minimize the expected total payment made by the mechanism. We consider the problem of designing incentive compatible, ex-post individually rational (IR) mechanisms for covering problems in the above model.
The standard notion of incentive compatibility (IC) in such settings is Bayesian incentive compatibility (BIC), but this notion is over-reliant on having precise knowledge of the underlying distribution, which makes it a rather non-robust notion. We formulate a notion of IC that we call robust Bayesian IC (robust BIC) that is substantially more robust than BIC, and develop black-box reductions from robust BIC mechanism design to algorithm design. This black-box reduction applies to single-dimensional settings even when we only have an LP-relative approximation algorithm for the algorithmic problem. We obtain near-optimal mechanisms for various covering settings including single- and multi-item procurement auctions, various single-dimensional covering problems, and multidimensional facility location problems. Finally, we study the notion of frugality, which considers the PayM objective but in a worst-case setting, where one does not have prior information about the players' types. We show that some of our mechanisms developed for the CM objective are also good with respect to certain oft-used frugality benchmarks proposed in the literature. We also introduce an alternate benchmark for frugality, which more directly reflects the goal that the mechanism's payment be close to the best possible payment, and obtain some preliminary results with respect to this benchmark.
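For intuition about truthful covering mechanisms, the simplest example is single-item procurement: select the cheapest provider and pay a threshold price, the largest cost that provider could have reported and still won. This is only the procurement analogue of the Vickrey auction, not one of the mechanisms developed in the thesis:

```python
def procurement_auction(bids):
    """Single-item reverse (procurement) auction with threshold payment.

    Selecting the cheapest provider and paying the second-cheapest bid makes
    truthful cost reporting a dominant strategy for every player.
    bids : dict mapping player -> reported cost
    """
    ranked = sorted(bids, key=bids.get)
    winner = ranked[0]
    payment = bids[ranked[1]]   # threshold: highest cost at which winner still wins
    return winner, payment

winner, pay = procurement_auction({"A": 5.0, "B": 7.0, "C": 6.5})
print(winner, pay)              # A wins and is paid 6.5
```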
599

Robust techniques for regression models with minimal assumptions / M.M. van der Westhuizen

Van der Westhuizen, Magdelena Marianna January 2011 (has links)
Good-quality management decisions often rely on the evaluation and interpretation of data. One of the most popular ways to investigate possible relationships in a given data set is to fit models to the data. Regression models are often employed to assist with decision making; in addition, they can be used for optimization and prediction. The success of a regression model, however, relies heavily on the assumptions made by the model builder, and the model may also be influenced by the presence of outliers; a more robust model, one not as easily affected by outliers, is necessary for making more accurate interpretations of the data. In this research study, robust techniques for regression models with minimal assumptions are explored. Mathematical programming techniques such as linear programming, mixed integer linear programming, and piecewise linear regression are used to formulate a nonlinear regression model. Outlier detection and smoothing techniques are included to address the robustness of the model and to improve predictive accuracy. The performance of the model is tested by applying it to a variety of data sets and comparing the results to those of other models. The results of the empirical experiments are also presented in this study. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
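One standard way mathematical programming yields outlier-robust regression with minimal distributional assumptions is least-absolute-deviations (LAD) fitting, which is expressible as a linear program. The sketch below is generic LAD, not the piecewise-linear model formulated in the thesis:

```python
import numpy as np
from scipy.optimize import linprog

def lad_regression(X, y):
    """Least-absolute-deviations fit via linear programming (illustrative).

    Minimize sum |y - X beta| by introducing residual bounds t_i:
        min  sum t    s.t.  -t <= y - X beta <= t
    LAD is far less sensitive to outliers than least squares.
    """
    n, p = X.shape
    # Decision variables: [beta (p, free sign), t (n, >= 0)]
    c = np.concatenate([np.zeros(p), np.ones(n)])
    A_ub = np.block([[ X, -np.eye(n)],     #  X beta - t <= y
                     [-X, -np.eye(n)]])    # -X beta - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

X = np.column_stack([np.ones(50), np.linspace(0, 10, 50)])
y = 2 + 0.5 * X[:, 1] + np.random.default_rng(2).normal(0, 0.2, 50)
y[::10] += 8                     # inject outliers
print(lad_regression(X, y))      # close to [2, 0.5] despite the outliers
```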
600

Comparison of two audio fingerprinting algorithms for advertisement identification / van Nieuwenhuizen H.A.

Van Nieuwenhuizen, Heinrich Abrie January 2011 (has links)
Although the identification of humans by fingerprints is a well-known technique in practice, the identification of an audio sample by means of a technique called audio fingerprinting is still under development. Audio fingerprinting can be used to identify different types of audio samples, of which music and advertisements are the two most frequently encountered. Different audio fingerprinting techniques for identifying audio samples appear seldom in the literature, and direct comparisons of the techniques are not always available. In this dissertation, the two audio fingerprinting techniques of Avery Wang and of Haitsma and Kalker are compared in terms of accuracy, speed, versatility, and scalability, with the goal of modifying the algorithms for optimal advertisement identification applications. To start, the background of audio fingerprinting is summarised and different algorithms for audio fingerprinting are reviewed. Problems, issues to be addressed, and the research methodology are discussed. The research question is formulated as follows: "Can audio fingerprinting be applied successfully to advertisement monitoring, and if so, which existing audio fingerprinting algorithm is most suitable as a basis for a generic algorithm, and how should the original algorithm be changed for this purpose?" The research question is followed by literature on the background of audio fingerprinting and different audio fingerprinting algorithms. Next, the importance of audio fingerprinting in the engineering field is motivated by the technical aspects related to audio fingerprinting. These technical aspects are not always necessary or part of the algorithm, but in most cases the audio is pre-processed, filtered, and downsampled. Other aspects include identifying unique features and storing them, on which each algorithm's techniques differ. More detail on Haitsma and Kalker's, Avery Wang's, and Microsoft's RARE algorithms is then presented. Next, the desired graphical user interface (GUI) for advertisement identification is presented. Different solution architectures for advertisement identification are discussed. A design is presented and implemented which focuses on advertisement identification and helps with the validation process of the algorithm. The implementation is followed by the experimental setup and tests. Finally, the dissertation ends with results and comparisons, which verified and validated the algorithm and thus affirmed the first part of the research question. A short summary of the contribution made in the dissertation is given, followed by conclusions and recommendations for future work. / Thesis (M.Ing. (Computer and Electronical Engineering))--North-West University, Potchefstroom Campus, 2012.
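For context, Avery Wang's technique hashes pairs of spectrogram peaks ("landmarks"), whereas Haitsma and Kalker derive 32-bit sub-fingerprints from energy differences between frequency bands. A much-simplified sketch of the landmark idea, where the window sizes, peak-picking rule, and fan-out are all assumptions:

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def landmark_hashes(audio, fs, fan_out=5):
    """Wang-style landmark fingerprints (simplified sketch).

    Find local spectrogram peaks, then hash pairs of nearby peaks as
    (f1, f2, dt) triples -- the constellation idea behind this approach.
    """
    f, t, S = spectrogram(audio, fs, nperseg=1024, noverlap=512)
    S = np.log(S + 1e-10)
    peaks = (S == maximum_filter(S, size=15)) & (S > S.mean())  # local maxima
    fi, ti = np.nonzero(peaks)
    order = np.argsort(ti)                  # anchor peaks in time order
    fi, ti = fi[order], ti[order]
    hashes = []
    for i in range(len(ti)):
        for j in range(i + 1, min(i + 1 + fan_out, len(ti))):
            hashes.append((fi[i], fi[j], ti[j] - ti[i]))   # anchor-target pair
    return hashes

fs = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s test tone
print(len(landmark_hashes(tone, fs)), "hashes")
```

Matching a recorded advertisement against a database then reduces to counting hash collisions that agree on a consistent time offset.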
