  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Robust optimization and machine learning tools for adaptive transmission in wireless networks

Yun, Sung-Ho 01 February 2012 (has links)
Current and emerging wireless systems require adaptive transmission to improve throughput, meet QoS requirements, and maintain robust performance. However, finding the optimal transmit parameters is becoming more difficult because of the growing number of wireless devices sharing the medium and the increasing dimensionality of the transmit parameters, e.g., the frequency, time, and spatial domains. The performance of adaptive transmission policies derived from given measurements degrades when the environment changes: the policies must either build in protection against those changes or tune themselves accordingly. Moreover, adaptation for systems that take advantage of transmit diversity with fine-grained resource allocation is hard to devise because of the prohibitively large number of explicit and implicit environmental variables involved, and solutions to simplified versions of the problem often fail owing to incorrect assumptions and approximations. In this dissertation, we propose two tools for adaptive transmission in changing, complex environments. We show that adjustable robust optimization builds protection into adaptive resource allocation in interference-limited cellular broadband systems while retaining the flexibility to tune the allocation to temporally changing demand. The second tool is a data-driven approach based on support vectors. We develop adaptive transmission policies that select the right set of transmit parameters in MIMO-OFDM wireless systems; although we do not explicitly consider all the relevant parameters, the learning-based algorithms take them into account implicitly and yield adaptation policies that fit the given environment. We extend the result to multicast traffic and show that the distributed algorithm, combined with the data-driven approach, increases system performance while keeping the overhead required for information exchange bounded.
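As an illustration of the data-driven adaptation idea in this abstract, the sketch below learns a link-adaptation policy from simulated (SNR, MCS, outcome) samples. The MCS rates and SNR success thresholds are invented for the toy channel model, and the estimator is a simple binned empirical-throughput rule rather than the dissertation's support-vector method.

```python
import random

# Toy data-driven link adaptation: learn which modulation/coding scheme (MCS)
# works at each SNR from observed (SNR, MCS, success) samples, then pick the
# throughput-maximizing MCS.  Rates and thresholds are hypothetical.
MCS_RATES = {0: 1.0, 1: 2.0, 2: 4.0}      # bits/symbol per scheme (invented)
TRUE_THRESH = {0: 2.0, 1: 8.0, 2: 16.0}   # SNR needed for success (toy channel)

def collect_samples(n=2000, seed=1):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        snr = rng.uniform(0.0, 25.0)
        mcs = rng.choice(list(MCS_RATES))
        out.append((snr, mcs, snr >= TRUE_THRESH[mcs]))
    return out

def fit_policy(samples, bin_width=5.0):
    # Empirical success counts per (SNR bin, MCS)
    stats = {}
    for snr, mcs, ok in samples:
        key = (int(snr // bin_width), mcs)
        succ, tot = stats.get(key, (0, 0))
        stats[key] = (succ + ok, tot + 1)
    def policy(snr):
        b = int(snr // bin_width)
        best, best_tput = 0, 0.0
        for mcs, rate in MCS_RATES.items():
            succ, tot = stats.get((b, mcs), (0, 1))
            tput = rate * succ / tot      # estimated throughput of this MCS
            if tput > best_tput:
                best, best_tput = mcs, tput
        return best
    return policy

policy = fit_policy(collect_samples())
print(policy(3.0), policy(20.0))
```

The learned policy chooses a conservative scheme at low SNR and the aggressive one at high SNR, which is the qualitative behavior any such adaptation method must recover.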
252

Beyond white space : robust spectrum sensing and channel statistics based spectrum accessing strategies for cognitive radio network

Liu, Yingxi 31 October 2013 (has links)
Cognitive radio refers to technology by which devices intelligently access unused frequency resources originally reserved for legacy services, in order to increase spectrum utilization; at the same time, the legacy services should not be affected by the cognitive radio devices' access. The central problems in cognitive radio are how to find unused frequency resources (spectrum sensing) and how to access them (spectrum accessing). This dissertation focuses on robust methods for spectrum sensing as well as spectrum-accessing strategies based on the statistics of channel availability. The first part of the thesis studies a non-parametric robust hypothesis-testing problem, aiming to eliminate the uncertainty and instability introduced by non-stationary noise, which is constantly observed in communication systems. An empirical likelihood ratio test with density-function constraints is proposed. This test outperforms many popular goodness-of-fit tests, including the robust Kolmogorov-Smirnov test and the Cramér-von Mises test; examples using spectrum-sensing data with real-world noise samples illustrate their performance. The second part focuses on spectrum-accessing strategies based on the channel idle-time distribution. A study of real-world wireless local area network traffic shows that the channel idle-time distribution can be modeled as a hyper-exponential distribution. With this model, the performance of a single cognitive radio, or secondary user, is studied when the licensed user, or primary user, does not react to interference. It is shown that with complete information about the hyper-exponential distribution, the secondary user can achieve a desirable performance, but when the model exhibits uncertainty and non-stationarity in time, as would happen for any kind of wireless traffic, the secondary user suffers a large performance loss.
A strategy that is robust to this uncertainty is proposed, and its performance is demonstrated using experimental data. Another aspect of the problem arises when the primary user is reactive; in this case, a spectrum-accessing strategy is devised to avoid long-duration interference to the primary user. The spectrum-accessing strategies are also extended to cognitive radio networks with multiple secondary users, for which a decentralized MAC protocol is devised that achieves total secondary capacity close to the optimum. A discussion of the engineering aspects and practical considerations of spectrum sensing and accessing is given at the end.
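The hyper-exponential idle-time model mentioned above can be sketched in a few lines: a mixture of exponentials whose mean has a simple closed form. The mixing probabilities and phase rates below are illustrative placeholders, not values measured from WLAN traces.

```python
import random

# Two-phase hyper-exponential channel idle-time model of the kind fitted to
# WLAN traffic in the abstract above; parameters are invented for illustration.
P = (0.7, 0.3)     # phase mixing probabilities
LAM = (2.0, 0.1)   # phase rates (1/s): frequent short idles, rare long ones

def sample_idle(rng):
    # Draw one idle period: pick a phase, then sample its exponential.
    phase = 0 if rng.random() < P[0] else 1
    return rng.expovariate(LAM[phase])

def mean_idle():
    # Analytic mean of the mixture: sum_i p_i / lambda_i
    return sum(p / lam for p, lam in zip(P, LAM))

rng = random.Random(42)
xs = [sample_idle(rng) for _ in range(200_000)]
print(round(sum(xs) / len(xs), 2), round(mean_idle(), 2))
```

The heavy second phase (rare but very long idles) is what makes such mixtures a better fit for real traffic than a single exponential, and also what makes a secondary user's timing decisions sensitive to model uncertainty.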
253

Phase space planning for robust locomotion

Zhao, Ye 25 November 2013 (has links)
Maneuvering nimbly through 3D structures is pivotal to the advancement of legged locomotion, yet few methods can generate 3D gaits in such terrains, and fewer still generalize to control dynamic maneuvers. In this thesis, foot-placement planning for dynamic locomotion traversing irregular terrain is explored in three-dimensional space. Given boundary values of the center of mass's apexes during the gait, sagittal and lateral phase-plane trajectories are predicted based on multi-contact and inverted-pendulum dynamics. To deal with the nonlinearity and dimensionality of the contact dynamics, we plan a geometric surface of motion beforehand and rely on numerical integration to solve the models. In particular, we combine multi-contact and prismatic inverted-pendulum models to resolve foot transitions between steps, allowing us to produce trajectory patterns similar to those observed in human locomotion. Our contributions lie in the following points: (1) the introduction of non-planar surfaces to characterize the center of mass's geometric behavior; (2) an automatic gait planner that simultaneously resolves sagittal and lateral foot placements; and (3) the introduction of multi-contact dynamics to transition smoothly between steps on rough terrain. Data-driven methods are powerful in the absence of accurate models; they rely on experimental data for trajectory regression and prediction. Here, we use regression tools to plan dynamic locomotion in the phase space of the robot's center of mass, and we develop nonlinear controllers to execute the desired plans with accuracy and robustness. In real robotic systems, sensor noise, simplified models, and external disturbances cause dramatic deviations of the actual closed-loop dynamics from the desired ones; moreover, producing dynamic locomotion plans for bipedal robots on all terrains remains an unsolved problem.
To tackle these challenges we propose two robust mechanisms: support vector regression for data-driven model fitting and contact planning, and trajectory-based sliding-mode control for accuracy and robustness. First, support vector regression is used to learn a data set obtained through numerical simulation, providing an analytical solution to the nonlinear locomotion dynamics. To approximate typical phase-plane behaviors, which contain infinite slopes and loops, we propose implicit fitting functions for the regression. Compared with mainstream explicit fitting methods, our regression method has several key advantages: (1) it models high-dimensional phase-space states with a single unified implicit function; (2) it avoids trajectory over-fitting; and (3) it is robust to noisy data. Finally, based on our regression models, we develop contact-switching plans and robust controllers that guarantee convergence to the desired trajectories. Overall, our methods are more robust than traditional regression methods, can learn more complex trajectories, and can readily be used to develop trajectory-based robust controllers for locomotion. Various case studies validate the effectiveness of our methods, including single- and multi-step planning in numerical simulation and swing-foot trajectory control on our Hume bipedal robot.
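Why implicit fitting helps with phase-plane loops can be shown with a much simpler stand-in for the thesis's support vector regression: fit a closed loop as the zero set of an implicit quadratic x² + y² + Dx + Ey + F = 0, which is linear least squares in (D, E, F). No explicit regression y = f(x) can represent such a curve, since it has two y values per x and vertical tangents.

```python
import math

# Implicit curve fitting sketch: recover a circle (a closed phase-plane loop)
# from sample points by linear least squares on x^2 + y^2 + D x + E y + F = 0.
# A toy stand-in for the SVR-based implicit regression described above.

def fit_circle(pts):
    # Normal equations S c = t for rows [x, y, 1] and target -(x^2 + y^2)
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in pts:
        row, b = (x, y, 1.0), -(x * x + y * y)
        for i in range(3):
            t[i] += row[i] * b
            for j in range(3):
                S[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(S[r][i]))
        S[i], S[p] = S[p], S[i]
        t[i], t[p] = t[p], t[i]
        for r in range(i + 1, 3):
            f = S[r][i] / S[i][i]
            for c in range(i, 3):
                S[r][c] -= f * S[i][c]
            t[r] -= f * t[i]
    c = [0.0] * 3
    for i in (2, 1, 0):   # back substitution
        c[i] = (t[i] - sum(S[i][j] * c[j] for j in range(i + 1, 3))) / S[i][i]
    D, E, F = c
    cx, cy = -D / 2, -E / 2
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

# Points on a circle of radius 2 centered at (1, -1)
pts = [(1 + 2 * math.cos(a / 50), -1 + 2 * math.sin(a / 50)) for a in range(314)]
print(fit_circle(pts))
```

The same idea, with a richer implicit function class, is what lets a single regression model capture loops and infinite slopes in phase-space trajectories.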
254

A robust non-time series approach for valuation of weather derivatives and related products

Friedlander, Michael Arthur. January 2011 (has links)
Doctoral thesis (Doctor of Philosophy), Statistics and Actuarial Science
255

Robust statistics based adaptive filtering algorithms for impulsive noise suppression

Zou, Yuexian, 鄒月嫻 January 2000 (has links)
Abstract of thesis submitted by Yuexian Zou for the degree of Doctor of Philosophy at The University of Hong Kong in May 2000 (uncorrected OCR). The behavior of an adaptive filter is inherently determined by how its estimation error and cost function are formulated under given assumptions about the statistics of the signals involved. This dissertation is concerned with the development of robust adaptive filtering in impulsive noise environments based on the linear transversal filter (LTF) and lattice-ladder filter (LLF) structures. Combining linear adaptive filtering theory with robust statistical estimation techniques, two new cost functions are proposed: the mean M-estimate error (MME) and the sum of weighted M-estimate errors (SWME). They can be viewed as generalizations of the well-known mean squared error (MSE) and sum of weighted squared errors (SWSE) cost functions when the signals involved are Gaussian. Based on the SWME cost function, the resulting optimal weight vector is governed by an M-estimate normal equation, and a recursive least M-estimate (RLM) algorithm is derived. The RLM algorithm preserves the fast initial convergence, low steady-state error, and robustness to sudden system change of the recursive least squares (RLS) algorithm under Gaussian noise alone, while also being able to suppress impulsive noise in both the desired and input signals. In addition, using the MME cost function, stochastic-gradient-based adaptive algorithms are developed: the least mean M-estimate (LMM) and its transform-domain version, the transform-domain least mean M-estimate (TLMM).
The LMM and TLMM algorithms can be viewed as generalizations of the least mean square (LMS) and transform-domain normalized LMS (TLMS) algorithms, respectively. These two robust algorithms perform similarly to the LMS and TLMS algorithms under Gaussian noise alone and are able to suppress impulsive noise appearing in the desired and input signals. The performance and computational complexity of the RLM, LMM, and TLMM algorithms are closely tied to the estimates of the threshold parameters of the M-estimate functions; a robust and effective recursive method is therefore proposed to estimate the variance of the estimation error and the required threshold parameters with a given confidence level for suppressing the impulsive noise. The mean and mean-square convergence of the RLM and LMM algorithms are analyzed under the assumption that the impulsive noise follows a contaminated Gaussian distribution. Motivated by the desirable features of the lattice-ladder filter, a new robust adaptive gradient lattice-ladder filtering algorithm is developed by minimizing an MME cost function together with an embedded robust impulse-suppression process, especially for impulses appearing at the filter input. The resulting robust gradient adaptive lattice-robust normalized LMS (RGAL-RNLMS) algorithm performs comparably to the conventional GAL-NLMS algorithm under Gaussian noise alone while suppressing the adverse effects of impulses in the input and desired signals; its additional computational complexity relative to GAL-NLMS is of order O(N_w log N_w) + O(N_f log N_f). Extensive computer simulations evaluate the performance of the RLM, LMM, TLMM, and RGAL-RNLMS algorithms under additive noise with either a contaminated Gaussian distribution or symmetric alpha-stable (SαS) distributions.
The results substantiate the analysis and demonstrate the effectiveness and robustness of the developed algorithms in suppressing impulsive noise in both the input and the desired signals of the adaptive filter. In conclusion, the approaches proposed in this dissertation represent an attempt to develop robust adaptive filtering algorithms for impulsive noise environments, can be viewed as an extension of linear adaptive filter theory, and may become reasonable and effective tools for solving practical adaptive filtering problems in non-Gaussian environments. Doctoral thesis (Doctor of Philosophy), Electrical and Electronic Engineering
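The clipping idea behind these M-estimate cost functions can be sketched as follows: the LMS gradient uses the raw error e, while a Huber-type influence function bounds the contribution of large (impulsive) errors. This toy uses a fixed threshold rather than the recursive threshold estimation proposed in the thesis, and the filter length, step size, and noise model are invented for illustration.

```python
import random

# Robust vs. plain LMS system identification under impulsive noise.
# psi() is a Huber influence function; with robust=False the update reduces
# to ordinary LMS.  Returns the average squared weight deviation at the end.

def psi(e, thresh=1.0):
    return max(-thresh, min(thresh, e))   # clip large errors

def identify(robust, w_true=(0.5, -0.3, 0.2), n=5000, mu=0.02, seed=7):
    rng = random.Random(seed)
    w = [0.0] * len(w_true)               # adaptive filter weights
    x = [0.0] * len(w_true)               # tapped delay line
    msd = 0.0                             # accumulated weight deviation
    for t in range(n):
        x = [rng.gauss(0, 1)] + x[:-1]
        d = sum(a * b for a, b in zip(w_true, x)) + rng.gauss(0, 0.01)
        if rng.random() < 0.01:           # 1% impulsive noise in d
            d += rng.choice((-1, 1)) * 50.0
        e = d - sum(a * b for a, b in zip(w, x))
        g = psi(e) if robust else e       # robust vs. plain LMS gradient
        w = [wi + mu * g * xi for wi, xi in zip(w, x)]
        if t >= n - 2000:                 # average over the final stretch
            msd += sum((a - b) ** 2 for a, b in zip(w, w_true))
    return msd / 2000

err_rob = identify(robust=True)
err_lms = identify(robust=False)
print(err_rob, err_lms)
```

Because both runs see the identical noise sequence, the comparison isolates the effect of the influence function: the clipped update stays near the true weights while the plain LMS update is repeatedly thrown off by the impulses.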
256

Essays on applications of majorization : robust inference, market demand elasticity, and constrained optimization

Ma, Jun January 2012 (has links)
No description available.
257

Robust Empirical Model-Based Algorithms for Nonlinear Processes

Diaz Mendoza, Juan Rosendo January 2010 (has links)
This research work proposes two robust empirical model-based predictive control algorithms for nonlinear processes. Chemical processes are generally highly nonlinear, so predictive control algorithms that explicitly account for the nonlinearity are expected to provide better closed-loop performance than algorithms based on linear models. Two types of models can be considered for control: first-principles and empirical. Empirical models were chosen for the proposed algorithms for the following reasons: (i) they are less complex for on-line optimization, (ii) they are easy to identify from input-output data, and (iii) their structure is suitable for formulating robustness tests. A key problem with every model used for prediction within a control strategy is that some model parameters cannot be known accurately, owing to measurement noise and/or error in the structure of the assumed model. In the robust control approach, processes are represented by models whose parameter values are assumed to lie between lower and upper bounds or, equivalently, by a nominal value plus uncertainty. When this parametric uncertainty is not considered by the controller, the control actions may be insufficient to control the process effectively, and in extreme cases the closed loop may become unstable. Accordingly, the two robust control algorithms proposed in this work explicitly account for the effect of uncertainty on stability and closed-loop performance. The first proposed controller is a robust gain-scheduling model predictive controller (MPC). The process is represented within each operating region by a state-affine model obtained from input-output data, and the state-affine model matrices are used to obtain a state-space-based MPC for every operating region.
By combining the state-affine, disturbance, and controller equations, a closed-loop representation was obtained. The resulting mathematical representation was then tested for robustness with linear matrix inequalities (LMIs), based on a test in which the vertices of the parameter box were obtained by an iterative procedure. The LMI test yields a performance measure, referred to as γ, that relates the effect of the disturbances to the process outputs. Finally, for the gain-scheduling part of the algorithm, a set of rules was proposed to switch between the available controllers according to the current process conditions. Since every combination of the controller tuning parameters results in a different value of γ, an optimization problem was posed to minimize γ with respect to the tuning parameters, ensuring that the effect of the disturbances on the output variables is kept to a minimum. A bioreactor case study shows the benefits of the proposed algorithm; for comparison, a non-robust linear MPC was also designed, and the results show a clear performance advantage for the proposed algorithm over non-robust linear MPC techniques. The second controller proposed in this work is a robust nonlinear model predictive controller (NMPC) based on an empirical Volterra series model. The benefit of a Volterra series model here is that its structure can be split into two sections that account for the nominal and uncertain parameter values. As with the gain-scheduled controller, the model parameters were obtained from input-output data. After identifying the Volterra model, an interconnection matrix and its corresponding uncertainty description were found; the interconnection matrix relates the process inputs and outputs and is built according to the type of cost function the controller uses.
Based on the interconnection representing the system, a robustness test was proposed based on a structured singular value (SSV) norm calculation. The test uses a min-max formulation in which the worst possible closed-loop error is minimized with respect to the manipulated variables. Additional factors considered in the cost function were manipulated-variable weighting, manipulated-variable restrictions, and a terminal condition. Two case studies, a single-input single-output (SISO) and a multiple-input multiple-output (MIMO) process, show that the proposed controller can control the process, efficiently tracking set-points in the presence of disturbances while complying with the saturation limits imposed on the manipulated variables. This controller was also compared against a non-robust linear MPC, a non-robust NMPC, and a non-robust first-principles NMPC, for different levels of uncertainty and different values of the suppression weights on the control actions. These comparisons show a trade-off between nominal performance and robustness to model error: for larger weights the controller is less aggressive, giving more sluggish performance but less sensitivity to model error, and hence smaller differences between the robust and non-robust schemes; for smaller weights the controller is more aggressive, giving better performance at nominal operating conditions but greater sensitivity to model error away from them. In the latter case, as a result of this increased sensitivity, the robust controller is found to be significantly better than the non-robust one.
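The min-max formulation at the heart of such robust controllers can be illustrated on a one-step scalar toy problem, not the Volterra model or SSV test of the thesis: choose the input u that minimizes the worst-case tracking cost over a bounded gain error. All constants below are invented for the sketch.

```python
# One-step min-max robust control sketch for a toy scalar plant
# y = (b + delta) * u with |delta| <= DELTA.  The robust input minimizes the
# worst-case cost; the nominal input assumes delta = 0.
B, DELTA, LAM, R = 2.0, 0.5, 0.1, 1.0   # nominal gain, gain uncertainty,
                                        # input weight, set-point (invented)

def worst_cost(u, grid=51):
    # max over delta in [-DELTA, DELTA] of tracking error plus input penalty
    deltas = (-DELTA + 2 * DELTA * i / (grid - 1) for i in range(grid))
    return max(((B + d) * u - R) ** 2 + LAM * u * u for d in deltas)

us = [i / 1000 for i in range(1001)]
u_rob = min(us, key=worst_cost)         # robust (min-max) input
u_nom = B * R / (B * B + LAM)           # closed-form nominal optimum
print(round(u_rob, 3), round(u_nom, 3))
```

By construction the robust input can never have a worse worst case than the nominal one; here it settles at the point where the two extreme-gain error branches balance, which is the basic mechanism that larger-scale min-max MPC formulations exploit.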
258

Robust Distributed Model Predictive Control Strategies of Chemical Processes

Al-Gherwi, Walid January 2010 (has links)
This work focuses on robustness issues related to distributed model predictive control (DMPC) strategies in the presence of model uncertainty. The robustness of DMPC with respect to model uncertainty has been identified by researchers as a key factor in the successful application of DMPC. A first task towards the formulation of a robust DMPC strategy was to propose a new systematic methodology for selecting a control structure in the context of DMPC. The methodology is based on the trade-off between performance and simplicity of structure (e.g., a centralized versus a decentralized structure) and is formulated as a multi-objective mixed-integer nonlinear program (MINLP). The multi-objective function combines two indices: (1) a closed-loop performance index, computed as an upper bound on the variability of the closed-loop system due to the effect of either set-point or disturbance inputs on the output error, and (2) a connectivity index used as a measure of the simplicity of the control structure. Parametric uncertainty in the process models is also considered and is described by a polytopic representation, whereby the actual process states are assumed to evolve within a polytope whose vertices are defined by linear models obtained either by linearizing a nonlinear model or by identification in the neighborhood of different operating conditions. Closed-loop performance and stability are formulated as linear matrix inequality (LMI) problems so that efficient interior-point methods can be exploited. To solve the MINLP, a multi-start approach is adopted in which many starting points are generated in an attempt to obtain global optima. The efficiency of the proposed methodology is shown through its application to benchmark simulation examples, with results consistent with the conclusions of the analysis.
The proposed methodology can be applied at the design stage to select the best control configuration in the presence of model errors. A second goal accomplished in this research was the development of a novel online algorithm for robust DMPC that explicitly accounts for parametric uncertainty in the model. The algorithm decomposes the entire system model into N subsystems and solves N corresponding convex optimization problems in parallel. The objective of these parallel optimizations is to minimize an upper bound on a robust performance objective by using a time-varying state-feedback controller for each subsystem; model uncertainty is explicitly considered through a polytopic description of the model. The algorithm employs an LMI approach, in which the solutions are convex and obtained in polynomial time. An observer is designed and embedded within each controller to perform state estimation, the stability of the observer integrated with the controller is tested online via LMI conditions, and an iterative design method is proposed for computing the observer gain. The algorithm has many practical advantages, first among them that it can be implemented in real-time control applications: it enables the use of a decentralized structure while maintaining overall stability and improving system performance, and it has been shown to achieve the theoretical performance of centralized control. Furthermore, it can be formulated with a variety of objectives, such as a Nash equilibrium among interacting processing units with local objective functions, or fully decentralized control in the case of communication failure; such cases are commonly encountered in the process industry. Simulation examples illustrate the application of the proposed method.
Finally, a third goal was the formulation of a new algorithm to improve the online computational efficiency of DMPC algorithms. The closed-loop dual-mode paradigm was employed to perform most of the heavy computations offline, using convex optimization to enlarge invariant sets and thus render the iterative online solution more efficient. The online solution requires the satisfaction of only relatively simple constraints and the solution of problems each involving a small number of decision variables; when the cooperative scheme is implemented, the algorithm requires solving N convex LMI problems in parallel, and a Nash-scheme formulation is also available. A relaxation method is incorporated to satisfy initial feasibility by introducing slack variables that converge to zero after a small number of early iterations. Simulation case studies illustrate the applicability of this approach and demonstrate that significant improvements in computation times can be achieved. Future extensions of this work should address communication loss, delays, and actuator failure and their impact on the robustness of DMPC algorithms; integration of the proposed DMPC algorithms with other layers of the automation hierarchy is another interesting topic for future work.
259

Multiobjective Optimization Algorithm Benchmarking and Design Under Parameter Uncertainty

LALONDE, NICOLAS 13 August 2009 (has links)
This research aims to improve our understanding of multiobjective optimization by comparing the performance of five multiobjective optimization algorithms and by proposing a new formulation to consider input uncertainty in multiobjective optimization problems. Four deterministic multiobjective optimization algorithms and one probabilistic algorithm were compared: the Weighted Sum, Adaptive Weighted Sum, Normal Constraint, and Normal Boundary Intersection methods, and the Nondominated Sorting Genetic Algorithm-II (NSGA-II). The algorithms were compared on six test problems covering a wide range of optimization problem types (bounded vs. unbounded, constrained vs. unconstrained). Performance metrics used for quantitative comparison were total run (CPU) time, number of function evaluations, variance in solution distribution, and the numbers of dominated and non-optimal solutions; graphical representations of the resulting Pareto fronts were also presented. No single method outperformed the others on all performance metrics, and the two classes of algorithms were effective for different types of problems: NSGA-II did not effectively solve problems involving unbounded design variables or equality constraints, while the deterministic algorithms could not solve a problem with a non-continuous objective function. In the second phase of this research, design under uncertainty was considered in multiobjective optimization. The effects of input uncertainty on a Pareto front were quantitatively investigated by developing a multiobjective robust optimization framework. Two possible effects on a Pareto front were identified: a shift away from the Utopia point, and a shrinking of the Pareto curve. A set of Pareto fronts was obtained in which the optimal solutions have different levels of insensitivity, or robustness, and four test problems were used to examine the change in the Pareto front.
Increasing the insensitivity requirement of the objective function with regard to input variations moved the Pareto front away from the Utopia point or reduced the length of the Pareto front. These changes were quantified, and the effects of changing robustness requirements were discussed. The approach would provide designers with not only the choice of optimal solutions on a Pareto front in traditional multiobjective optimization, but also an additional choice of a suitable Pareto front according to the acceptable level of performance variation. / Thesis (Master, Mechanical and Materials Engineering) -- Queen's University, 2009-08-10 21:59:13.795
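The Weighted Sum method compared in this study can be sketched in a few lines for two convex objectives: scalarize with a weight, sweep the weight, and collect the minimizers as a Pareto front. The objective functions and weight grid here are invented for illustration; on non-convex fronts this method is known to miss solutions, which is part of why the study compares it against alternatives.

```python
# Weighted Sum Pareto-front sketch for two convex single-variable objectives.
def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

def argmin(fun, lo=-1.0, hi=3.0, steps=4001):
    # Brute-force grid minimization stands in for a proper solver here.
    xs = (lo + (hi - lo) * i / (steps - 1) for i in range(steps))
    return min(xs, key=fun)

front = []
for k in range(11):
    w = k / 10                                   # weight on f1
    x = argmin(lambda x: w * f1(x) + (1 - w) * f2(x))
    front.append((round(f1(x), 3), round(f2(x), 3)))
print(front[0], front[-1])
```

The two endpoints of the sweep recover the individual minimizers of f2 and f1, and the interior weights trace the trade-off curve between them, i.e., the Utopia-point trade-off that the robust formulation in this thesis then perturbs.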
260

Minimax Design for Approximate Straight Line Regression

Daemi, Maryam Unknown Date
No description available.
