361

A Riemannian Distance For Robust Downlink Beamforming

Xu, Lijin 10 1900 (has links)
<p>We examine robust downlink beamforming design from the perspective of an outage probability constraint. We reason that, since the estimated downlink channel correlation (DCC) matrices form a manifold in the signal space, the estimation error should be measured by the Riemannian distance (RD) rather than the commonly used Euclidean distance (ED). Applying this measure to our design constraint, we establish approximate outage probability constraints using a multidimensional ball set and a multidimensional cube set. We transform the design problem into a convex optimization problem that can be solved efficiently by standard methods. Our proposed methods apply under both Gaussian and uniform distribution assumptions. Simulation results show that the performance of our design is superior to that of other recently developed robust beamformers.</p> / Master of Applied Science (MASc)
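The Riemannian distance mentioned in this abstract can be illustrated for positive definite matrices such as the DCC matrices described. A common choice is the affine-invariant RD, computed from the generalized eigenvalues of the matrix pair; the sketch below (a hypothetical example, not the thesis's exact formulation) contrasts it with the Euclidean distance:

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    sqrt(sum_i log^2(lambda_i)), lambda_i generalized eigenvalues of (B, A)."""
    lam = eigvalsh(B, A)              # solves B x = lambda A x
    return np.sqrt(np.sum(np.log(lam) ** 2))

def euclidean_distance(A, B):
    return np.linalg.norm(A - B, "fro")

# Two illustrative 2x2 correlation-like matrices
A = np.eye(2)
B = np.diag([4.0, 0.25])
print(riemannian_distance(A, B))   # sqrt(2) * ln(4) ≈ 1.9605
print(euclidean_distance(A, B))    # sqrt(9.5625) ≈ 3.0923
```

Unlike the Euclidean distance, the RD is symmetric under matrix inversion, which matches the manifold geometry of covariance-type matrices.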
362

Robust Power Loading for the TDD MISO Downlink with Outage Constraints

Sohrabi, Foad 10 1900 (has links)
<p>We consider the problem of power allocation for the single-cell multiple-input single-output (MISO) downlink in a time division duplex (TDD) system. In such systems, the base station (BS) acquires information about the channel state during the training component of the uplink phase. The resulting estimation errors are modeled probabilistically, and the receivers specify quality-of-service (QoS) constraints in terms of a target signal-to-interference-and-noise ratio that is to be achieved with a given outage probability. For a fixed beamforming structure, we seek a power allocation that minimizes the transmission power required to satisfy the users’ QoS requests.</p> <p>The proposed approach begins with the observation that for TDD systems the channel estimation error at the base station can be modeled as additive and Gaussian. Under that model, we obtain a precise deterministic characterization of the outage probability, together with mildly conservative approximations thereof. Although the resulting deterministic optimization problems are not convex, good solutions can be obtained using straightforward coordinate update algorithms. These solutions perform significantly better than existing approaches based on convex restrictions, because the proposed approximations are less conservative. Approximations of the precise deterministic characterization of the outage probability yield algorithms with good performance at much lower computational cost.</p> / Master of Applied Science (MASc)
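The outage-constrained power minimization described here can be sketched numerically. The toy example below (all values hypothetical, single user, noise-limited, matched-filter beam) estimates the outage probability under an additive Gaussian channel error by Monte Carlo and bisects for the minimal power meeting the constraint; the thesis's deterministic characterization replaces exactly this sampling step:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                     # transmit antennas (illustrative)
h_hat = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
w = h_hat / np.linalg.norm(h_hat)         # fixed matched-filter beam direction
sigma_e, sigma_n, gamma, eps = 0.1, 0.1, 2.0, 0.05  # error/noise power, SINR target, outage

def outage(p, n_mc=20000):
    """Monte Carlo estimate of P(SINR < gamma) under additive Gaussian channel error."""
    e = (rng.standard_normal((n_mc, N)) + 1j * rng.standard_normal((n_mc, N))) * sigma_e / np.sqrt(2)
    sinr = p * np.abs((h_hat + e) @ w.conj()) ** 2 / sigma_n
    return np.mean(sinr < gamma)

# Bisection for the minimal power meeting the constraint (outage decreases in p)
lo, hi = 1e-3, 10.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if outage(mid) > eps else (lo, mid)
print(f"minimum power ~ {hi:.4f}, outage at that power ~ {outage(hi):.3f}")
```

The monotonicity of the outage probability in the transmit power is what makes the one-dimensional search valid for a fixed beamforming structure.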
363

CONFIDENCE REGIONS FOR OPTIMAL CONTROLLABLE VARIABLES FOR THE ROBUST PARAMETER DESIGN PROBLEM

Cheng, Aili January 2012 (has links)
In robust parameter design it is often possible to set the levels of the controllable factors to produce a zero gradient for the transmission of variability from the noise variables. If the number of control variables is greater than the number of noise variables, a continuum of zero-gradient solutions exists. This situation is useful as it provides the experimenter with multiple conditions under which to configure a zero gradient for noise variable transmission. However, it requires a confidence region for the multiple-solution factor levels that provides proper simultaneous coverage. This requirement has not been previously recognized in the literature. In the case where the number of control variables is greater than the number of noise variables, we show how to construct critical values needed to maintain the simultaneous coverage rate. Two examples are provided to demonstrate the practical need to adjust the critical values for simultaneous coverage. The zero-gradient confidence region focuses only on the variance, and there are indeed many situations in which interest lies entirely in the process variance. For situations where both mean and variance need to be considered, a general confidence region in the control variables is developed by minimizing a weighted mean square error. This general method is applicable to many settings, including mixture experiments, which have an inherent constraint on the control factors. It also gives the user the flexibility to put different weights on the mean and variance parts for simultaneous optimization. It turns out that the same computational algorithm can be used to compute the dual confidence region in both the control factors and the response variable. / Statistics
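The weighted mean square error criterion underlying the general confidence region can be sketched as a point-optimization step. The example below uses hypothetical fitted response-surface models (the coefficient values are invented for illustration, not taken from the dissertation) and minimizes the weighted MSE over a grid of control settings:

```python
import numpy as np

# Hypothetical fitted models from a robust parameter design study:
# mean_hat(x) for the process mean, var_hat(x) for transmitted noise variance.
target = 10.0
def mean_hat(x):
    return 8.0 + 3.0 * x[0] - 1.0 * x[1] + 0.5 * x[0] * x[1]
def var_hat(x):
    return 1.0 + (2.0 * x[0] - x[1] - 0.5) ** 2   # zero-gradient locus: 2*x0 - x1 = 0.5

w = 0.5                                           # weight on the squared-bias part
def wmse(x):
    return w * (mean_hat(x) - target) ** 2 + (1 - w) * var_hat(x)

xs = np.linspace(-1, 1, 101)                      # coded factor levels
grid = [(a, b) for a in xs for b in xs]
x_star = min(grid, key=wmse)
print("optimal setting:", tuple(round(v, 2) for v in x_star),
      "WMSE:", round(wmse(x_star), 3))
```

Varying `w` trades off bias against variance transmission, which is the flexibility the abstract describes; the confidence region itself would then be built around such optimizers.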
364

Robust Sequential View Planning for Object Recognition Using Multiple Cameras

Farshidi, Forough 07 1900 (has links)
<p> In this thesis the problem of object recognition/pose estimation using active sensing is investigated. It is assumed that multiple cameras acquire images from different view angles of an object belonging to a set of a priori known objects. The eigenspace method is used to process the sensory observations and produce an abstract measurement vector. This step is necessary to avoid the manipulation of the original sensor data, i.e. large images, that can render the sensor modelling and matching process practically infeasible.</p> <p> The eigenspace representation is known to have shortcomings in dealing with structured noise such as occlusion. To overcome this problem, models of occlusions and sensor noise have been incorporated into the probabilistic model of sensor/object to increase robustness with respect to such uncertainties. The active recognition algorithm has also been modified to consider the possibility of occlusion, as well as variation in the occlusion levels due to camera movements.</p> <p> A recursive Bayesian state estimation problem is formulated to model the observation uncertainties through a probabilistic scheme. This enables us to identify the object and estimate its pose by fusing the information obtained from individual cameras. To this end, an extensive training step is performed, providing the system with the sensor model required for the Bayesian estimation. In order to enhance the quality of the estimates and to reduce the number of images taken, we employ active real-time viewpoint planning strategies to position cameras. For that purpose, the positions of cameras are controlled based on two different statistical performance criteria, namely the Mutual Information (MI) and Cramér-Rao Lower Bound (CRLB).</p> <p> A multi-camera active vision system has been developed in order to implement the ideas proposed in this thesis. 
Comparative Monte Carlo experiments conducted with the two-camera system demonstrate the effectiveness of the proposed methods in object classification/pose estimation in the presence of structured noise. Different concepts introduced in this work, i.e., the multi-camera data fusion, the occlusion modelling, and the active camera movement, all improve the recognition process significantly. Specifically, these approaches all increase the recognition rate, decrease the number of steps taken before recognition is completed, and enhance robustness with respect to partial occlusion considerably.</p> / Thesis / Master of Applied Science (MASc)
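The recursive Bayesian update with mutual-information view planning described above can be sketched for a discrete toy problem. In the example below (a hypothetical likelihood table, not the thesis's trained sensor model), each step selects the camera viewpoint maximizing the expected information gain about the object identity, then performs a Bayes update on the belief:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical likelihood table P[o, v] = P(obs | object o, viewpoint v):
# 3 candidate objects, 4 viewpoints, 2 discrete observation symbols
P = rng.dirichlet(np.ones(2), size=(3, 4))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(belief, v):
    # I(object; obs | view v) = H(obs) - sum_o b(o) H(obs | o)
    p_obs = belief @ P[:, v]                          # predictive obs distribution
    return entropy(p_obs) - np.sum(belief * [entropy(P[o, v]) for o in range(3)])

belief = np.ones(3) / 3                               # uniform prior over objects
true_obj = 0
for _ in range(5):
    v = max(range(4), key=lambda v: expected_info_gain(belief, v))  # MI view planning
    obs = rng.choice(2, p=P[true_obj, v])             # simulated camera reading
    belief = belief * P[:, v, obs]                    # recursive Bayes update
    belief /= belief.sum()
print("posterior over objects:", np.round(belief, 3))
```

The CRLB criterion mentioned in the abstract would replace `expected_info_gain` with a bound-based score; the recursive update itself is unchanged.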
365

Robust Inventory Management under Supply and Demand Uncertainties

Chu, Jie January 2018 (has links)
In this thesis, we study three periodic-review, finite-horizon inventory systems in the presence of supply and demand uncertainties. In the first part of the thesis, we study a multi-period single-station problem in which supply uncertainty is modeled by partial supply. Formulating the problem under a robust optimization (RO) framework, we show that solving the robust counterpart is equivalent to solving a nominal problem with a modified deterministic demand sequence. In particular, in the stationary case the optimal robust policy follows the quasi-(s, S) form, and the corresponding s and S levels are theoretically computable. In the second part of the thesis, we extend the RO framework to a multi-period multi-echelon problem. We show that for a tree-structured network, decomposition applies, so that the optimal single-station robust policy remains valid for each echelon in the tree. Furthermore, if there are no setup costs in the network, then the problem can be decomposed into several uncapacitated single-station problems with new cost parameters subject to the deterministic demands. In the last part of the thesis, we consider a periodic-review Assemble-To-Order (ATO) system with multiple components and multiple products, where the inventory replenishment for each component follows an independent base-stock policy and product demands are satisfied according to a First-Come-First-Served (FCFS) rule. We jointly consider the inventory replenishment and component allocation problems in the ATO system under stochastic component replenishment lead times and stochastic product demands. The problems are formulated under the stochastic programming (SP) framework and are difficult to solve exactly due to the large number of scenarios. We use sample average approximation (SAA) algorithms to find near-optimal solutions, whose accuracy is verified by the numerical experiment results. / Thesis / Doctor of Philosophy (PhD)
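The (s, S)-type policy under partial supply described in the first part can be sketched with a small simulation. The example below (cost values, demand range, and the supply-fraction model are all hypothetical; the thesis computes the levels analytically) simulates a single station operating a fixed (s, S) policy when only a random fraction of each order arrives:

```python
import numpy as np

rng = np.random.default_rng(2)
s, S = 20, 60                # illustrative reorder point / order-up-to levels
h, b, K = 1.0, 5.0, 10.0     # holding, backlog, and setup costs (hypothetical)

def simulate(T=12):
    x, cost = 30.0, 0.0
    for _ in range(T):
        if x < s:                                      # (s, S) policy: order up to S
            order = S - x
            received = order * rng.uniform(0.5, 1.0)   # partial supply: a fraction arrives
            cost += K
            x += received
        d = rng.integers(5, 25)                        # uncertain demand
        x -= d
        cost += h * max(x, 0) + b * max(-x, 0)         # holding / backlog penalty
    return cost

costs = [simulate() for _ in range(2000)]
print(f"average 12-period cost ~ {np.mean(costs):.1f}")
```

A robust-optimization treatment would replace the sampled demand and supply fractions with worst-case sequences from an uncertainty set; this sketch only illustrates the policy structure being optimized.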
366

New robust and fragile watermarking scheme for colour images captured by mobile phone cameras

Jassim, Taha Dawood, Abd-Alhameed, Raed, Al-Ahmad, Hussain January 2013 (has links)
This paper examines and evaluates a new robust and fragile watermarking scheme for colour images captured by mobile phone cameras. Authentication is provided by the fragile watermark, while copyright protection is provided by the robust one. The mobile phone number, including the international code, is unique across the whole world, and it is used as the robust watermark. The number is embedded in the frequency domain using the discrete wavelet transform. Hash codes, on the other hand, are used as fragile watermarks and inserted in the spatial domain of the RGB image. The scheme is blind: the extraction process of the watermarks (robust and fragile) does not require the original image. The fragile watermark can detect any tampering in the image, while the robust watermark is strong enough to survive several attacks. The watermarking algorithm causes minimal distortion to the images. The proposed algorithm has been successfully tested, evaluated and compared with other algorithms.
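Embedding a number's bits in DWT coefficients, as this abstract describes, can be sketched end to end. The example below is a simplified stand-in for the paper's method: it uses a hand-rolled one-level Haar DWT, a hypothetical phone number reduced to digit-parity bits, and quantization index modulation (QIM) on the LH sub-band; extraction is blind, needing only the marked image:

```python
import numpy as np

def haar2d(img):
    """One-level orthonormal 2-D Haar DWT of an even-sized grayscale image."""
    a = (img[:, ::2] + img[:, 1::2]) / np.sqrt(2)
    d = (img[:, ::2] - img[:, 1::2]) / np.sqrt(2)
    LL = (a[::2] + a[1::2]) / np.sqrt(2); LH = (a[::2] - a[1::2]) / np.sqrt(2)
    HL = (d[::2] + d[1::2]) / np.sqrt(2); HH = (d[::2] - d[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.zeros((LL.shape[0] * 2, LL.shape[1])); d = np.zeros_like(a)
    a[::2], a[1::2] = (LL + LH) / np.sqrt(2), (LL - LH) / np.sqrt(2)
    d[::2], d[1::2] = (HL + HH) / np.sqrt(2), (HL - HH) / np.sqrt(2)
    img = np.zeros((a.shape[0], a.shape[1] * 2))
    img[:, ::2], img[:, 1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return img

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, (8, 8))
bits = [int(c) % 2 for c in "447712345678"]   # hypothetical number -> digit-parity bits
LL, LH, HL, HH = haar2d(img)

q = 8.0                                        # QIM quantization step
flat = LH.flatten()
for i, bit in enumerate(bits):                 # embed: snap coefficient to bit's lattice
    flat[i] = 2 * q * np.round((flat[i] - bit * q) / (2 * q)) + bit * q
marked = ihaar2d(LL, flat.reshape(LH.shape), HL, HH)

# Blind extraction: no original image required
_, LH2, _, _ = haar2d(marked)
recovered = [int(np.round(c / q)) % 2 for c in LH2.flatten()[:len(bits)]]
print(recovered == bits)   # True
```

Because the Haar transform here is perfectly invertible, the quantized coefficients survive the round trip exactly; a real scheme must additionally pick `q` so the watermark survives compression and other attacks.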
367

Robust Adaptive Signal Processors

Picciolo, Michael L. 21 April 2003 (has links)
Standard open-loop linear adaptive signal processing algorithms derived from the least squares minimization criterion require estimates of the N-dimensional input interference and noise statistics. Often, estimated statistics are biased by contaminant data (such as outliers and non-stationary data) that do not fit the dominant distribution, which is often modeled as Gaussian. In particular, the convergence of sample covariance matrices used in block-processed adaptive algorithms, such as the Sample Matrix Inversion (SMI) algorithm, is known to be affected significantly by outliers, causing undue bias in the subsequent adaptive weight vectors. The convergence measure of effectiveness (MOE) of the benchmark SMI algorithm is known to be relatively fast (order K = 2N training samples) and independent of the (effective) rank of the external interference covariance matrix, making it a useful method in practice for non-contaminated data environments. Novel robust adaptive algorithms are introduced here that outperform SMI algorithms in contaminated data environments, while some retain SMI's valuable convergence-independence feature. Convergence performance is shown to be commensurate with SMI in non-contaminated environments as well. The robust algorithms are based on the Gram-Schmidt Cascaded Canceller (GSCC) structure, for which novel building-block algorithms are derived and analyzed using the theory of Robust Statistics. Coined M-cancellers after the M-estimates of Huber, these novel cascaded cancellers combine robustness and statistical estimation efficiency in order to provide good adaptive performance in both contaminated and non-contaminated data environments. Additionally, a hybrid processor is derived by combining the Multistage Wiener Filter (MWF) and Median Cascaded Canceller (MCC) algorithms. Both simulated data and measured Space-Time Adaptive Processing (STAP) airborne radar data are used to show performance enhancements.
The STAP application area is described in detail in order to further motivate research into robust adaptive processing. / Ph. D.
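The benchmark SMI algorithm against which the robust cancellers are compared is compact enough to sketch. The toy example below (array geometry, interferer, and the K = 2N training rule are illustrative; this is the baseline, not the dissertation's robust M-canceller) estimates a sample covariance from training snapshots and forms the adaptive weights:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4                                  # sensors
K = 2 * N                              # training snapshots, the "K = 2N" rule of thumb
steer = np.ones(N) / np.sqrt(N)        # look-direction steering vector (illustrative)

# Training data: unit white noise plus one strong interferer off the look direction
v_int = np.exp(1j * np.pi * np.arange(N) * np.sin(0.5)) / np.sqrt(N)
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
jam = 10 * v_int[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
X = noise + jam

R_hat = X @ X.conj().T / K             # sample covariance: the "SMI" estimate
w = np.linalg.solve(R_hat, steer)      # adaptive weights, w proportional to R^{-1} s
w /= steer.conj() @ w                  # normalize to unit gain in the look direction

print("gain toward look direction:", abs(steer.conj() @ w))   # 1.0 by construction
print("gain toward interferer   :", abs(v_int.conj() @ w))    # well below 1 (nulled)
```

A single outlier snapshot injected into `X` would bias `R_hat` and degrade the null, which is exactly the failure mode the M-cancellers are designed to resist.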
368

A Polynomial Chaos Approach to Control Design

Templeton, Brian Andrew 11 September 2009 (has links)
A method utilizing H2 control concepts and the numerical method of Polynomial Chaos was developed in order to create a novel robust, probabilistically optimal control approach. This method was created for the practical reason that uncertainty in parameters tends to be inherent in system models. As such, the development of new methods utilizing probability density functions (PDFs) was desired. From a more theoretical viewpoint, the use of Polynomial Chaos for studying and designing control systems has not been thoroughly investigated. The current work expands the H2 and related Linear Quadratic Regulator (LQR) control problems to systems with parametric uncertainty. This allows solving deterministic linear equations that represent probabilistic linear differential equations. The application of common LTI (Linear Time Invariant) tools to these expanded systems is theoretically justified and investigated. Examples demonstrating the optimization process for minimizing the H2 norm, along with parallels to LQR design, are presented. The dissertation begins with a thorough background section that reviews the necessary probability theory and explains the connection between Polynomial Chaos and dynamic systems. Next, an overview of related control methods is given, together with an in-depth review of the current Polynomial Chaos literature. Formal analysis related to the use of Polynomial Chaos follows, laying the groundwork for the general method of control design using Polynomial Chaos and H2. An experimental section then demonstrates controller synthesis for a constructed probabilistic system, and the experimental results lend support to the method. / Ph. D.
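The Polynomial Chaos machinery this abstract builds on can be illustrated in its simplest form: expanding a random quantity in probabilists' Hermite polynomials of a standard Gaussian parameter. The sketch below (a generic PC example with u(ξ) = exp(ξ), not the dissertation's control system) computes the expansion coefficients by Gauss-Hermite quadrature and recovers the mean and variance from them:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Probabilists' Hermite PC expansion of u(xi) = exp(xi), xi ~ N(0, 1)
P = 8                                   # expansion order (illustrative)
x, w = hermegauss(40)                   # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2 * np.pi)              # renormalize to the standard normal measure
f = np.exp(x)

coef = np.zeros(P + 1)
for k in range(P + 1):
    Hk = hermeval(x, np.eye(P + 1)[k])                 # He_k at the quadrature nodes
    coef[k] = np.sum(w * f * Hk) / math.factorial(k)   # <f, He_k> / <He_k, He_k> = k!

mean_pc = coef[0]                                      # E[u] is the zeroth coefficient
var_pc = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, P + 1))
print(mean_pc, np.exp(0.5))             # both ~ 1.6487
print(var_pc, np.exp(2) - np.exp(1))    # both ~ 4.671
```

In the control setting, the same expansion is applied to the uncertain system matrices, turning a stochastic ODE into a larger deterministic one to which LTI tools can be applied.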
369

Contributions to Robust Adaptive Signal Processing with Application to Space-Time Adaptive Radar

Schoenig, Gregory Neumann 04 May 2007 (has links)
Classical adaptive signal processors typically rely on assumptions in their derivation. The presence of adequate Gaussian and independent and identically distributed (i.i.d.) input data is central among such assumptions. However, classical processors have a tendency to suffer a degradation in performance when assumptions like these are violated. Worse yet, such degradation is not guaranteed to be proportional to the level of deviation from the assumptions. This dissertation proposes new signal processing algorithms based on aspects of modern robustness theory, including methods that make presently non-adaptive robust approaches adaptive. The contributions presented are the result of research performed jointly in two disciplines, namely robustness theory and adaptive signal processing. This joint consideration of robustness and adaptivity enables improved performance in assumption-violating scenarios, precisely the scenarios in which classical adaptive signal processors fail. Three contributions are central to this dissertation. First, a new adaptive diagnostic tool for high-dimensional data is developed and shown to be robust under problematic contamination. Second, a robust data pre-whitening method is presented based on the new diagnostic tool. Finally, a new suppression-based robust estimator is developed for use with complex-valued adaptive signal processing data. To exercise the proposals and compare their performance to state-of-the-art methods, data sets commonly used in statistics as well as Space-Time Adaptive Processing (STAP) radar data, both real and simulated, are processed, and performance is subsequently computed and displayed. The new algorithms are shown to outperform their state-of-the-art counterparts in terms of both signal-to-interference-plus-noise ratio (SINR) convergence rate and target detection. / Ph. D.
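The robustness-theory tools this dissertation draws on can be illustrated with the classic Huber M-estimate of location, computed by iteratively reweighted least squares. The sketch below (a generic textbook example, not one of the dissertation's estimators) shows the bounded-influence behavior that classical moment estimates lack:

```python
import numpy as np

def huber_location(x, c=1.345, iters=50):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    mu = np.median(x)                            # robust starting point
    scale = np.median(np.abs(x - mu)) / 0.6745   # MAD scale estimate
    scale = scale if scale > 0 else 1.0
    for _ in range(iters):
        r = (x - mu) / scale
        wts = np.clip(c / np.maximum(np.abs(r), 1e-12), None, 1.0)  # Huber weights
        mu = np.sum(wts * x) / np.sum(wts)
    return mu

rng = np.random.default_rng(5)
clean = rng.normal(0.0, 1.0, 200)
data = np.concatenate([clean, np.full(20, 50.0)])    # ~9% gross contamination
print(f"sample mean   : {data.mean():.2f}")          # dragged toward the outliers
print(f"Huber estimate: {huber_location(data):.2f}")  # stays near 0
```

The same bounded-weight idea, extended to complex-valued data and high dimensions, underlies the suppression-based estimator and pre-whitening method the abstract describes.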
370

Automated Detection of Surface Defects on Barked Hardwood Logs and Stems Using 3-D Laser Scanned Data

Thomas, Liya 15 November 2006 (has links)
This dissertation presents an automated detection algorithm that identifies severe external defects on the surfaces of barked hardwood logs and stems. The defects detected are at least 0.5 inch in height and at least 3 inches in diameter; they are severe, medium to large in size, and have external surface rises. Hundreds of real log defect samples were measured, photographed, and categorized to summarize the main defect features and to build a defect knowledge base. Three-dimensional laser-scanned range data capture the external log shapes and portray bark patterns, defective knobs, and depressions. The log data are extremely noisy, contain missing data, and include severe outliers induced by loose bark that dangles from the log trunk. Because the circle model is nonlinear and presents both additive and non-additive errors, a new robust generalized M-estimator has been developed that differs from the ones proposed in the statistical literature for linear regression. Circle fitting is performed by standardizing the residuals via scale estimates calculated by means of projection statistics and incorporated in the Huber objective function to bound the influence of the outliers on the estimates. The projection statistics are based on 2-D radial-vector coordinates instead of the row vectors of the Jacobian matrix as proposed in the statistical literature dealing with linear regression. This approach proves effective in that it makes the GM-estimator influence-bounded and thereby robust against outliers. Severe defects are identified through the analysis of 3-D log data using decision rules obtained from analyzing the knowledge base. Contour curves are generated from radial distances, which are determined by robust 2-D circle fitting to the log-data cross sections. The algorithm detected 63 of a total of 68 severe defects, with 10 non-defective regions falsely identified as defects.
Measured by area, the algorithm locates 97.6% of the defect area and falsely identifies 1.5% of the total clear area as defective. / Ph. D.
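The robust circle fitting at the heart of the contour generation can be sketched with a simplified stand-in: an algebraic (Kasa) circle model fitted by iteratively reweighted least squares with Huber weights on standardized residuals. This is not the dissertation's GM-estimator with projection statistics (here a plain MAD scale is used), but it shows the influence-bounding idea on cross-section data with loose-bark-style outliers:

```python
import numpy as np

def robust_circle_fit(pts, c_h=1.345, iters=30):
    """IRLS circle fit: algebraic model x^2 + y^2 = 2a x + 2b y + c,
    with Huber weights on MAD-standardized residuals to bound outliers."""
    X = np.c_[2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))]
    y = (pts ** 2).sum(axis=1)
    wts = np.ones(len(pts))
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(wts[:, None] * X, wts * y, rcond=None)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12             # robust scale
        wts = np.minimum(1.0, c_h / np.maximum(np.abs(r) / s, 1e-12))
    a, b, c = beta
    return a, b, np.sqrt(c + a ** 2 + b ** 2)                 # center (a, b), radius

rng = np.random.default_rng(6)
t = rng.uniform(0, 2 * np.pi, 120)
pts = np.c_[3 + 2 * np.cos(t), -1 + 2 * np.sin(t)] + rng.normal(0, 0.02, (120, 2))
pts[:10] += 5.0                         # gross outliers, like loose bark off the trunk
cx, cy, rad = robust_circle_fit(pts)
print(round(cx, 2), round(cy, 2), round(rad, 2))   # near 3, -1, 2 once outliers are downweighted
```

An unweighted least-squares fit on the same points would be pulled noticeably toward the outlier cluster; the downweighting is what keeps the radial-distance contours faithful to the log surface.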
