  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1331

ROBUST ADAPTIVE BEAMFORMING WITH BROAD NULLS

Yudong, He, Xianghua, Yang, Jie, Zhou, Banghua, Zhou, Beibei, Shao 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Robust adaptive beamforming using worst-case performance optimization has been developed in recent years. It performs well against array response errors, but it cannot reject strong interferences. In this paper, we propose a scheme for robust adaptive beamforming with broad nulls to reject strong interferences. We add a quadratic constraint to suppress the power of the array response over the spatial region of the interferences. The optimal weighting vector is then obtained by minimizing the power of the array output subject to quadratic constraints on the desired signal and the interferences, respectively. We derive the formulations for the optimization problem and solve it efficiently using a recursive Newton algorithm. Numerical examples are presented to compare the performance of robust adaptive beamforming with no null constraints, with sharp nulls, and with broad nulls. The results show the proposed scheme's powerful ability to reject strong interferences.
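The sector-constrained minimization lends itself to a compact illustration. Below is a minimal NumPy sketch of the broad-null idea: instead of the paper's Newton recursion, it folds a sector covariance matrix C (the array response integrated over the interference region) into the sample covariance as a quadratic penalty, which spreads a broad null across the whole sector rather than a point null. The array size, angles, and the trade-off weight mu are illustrative assumptions, not values from the paper.

```python
import numpy as np

def steering(theta_deg, n=10, d=0.5):
    """Steering vector of an n-element uniform linear array, spacing d in wavelengths."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))

n = 10
a0 = steering(0.0, n)                        # desired signal from broadside

# Sample covariance: unit noise plus a strong interferer near 30 degrees (illustrative).
ai = steering(30.0, n)
R = np.eye(n) + 1000 * np.outer(ai, ai.conj())

# Sector matrix C ~ integral of a(theta) a(theta)^H over the interference region;
# penalizing w^H C w suppresses the response across all of 25-35 degrees.
thetas = np.linspace(25.0, 35.0, 41)
C = sum(np.outer(steering(t, n), steering(t, n).conj()) for t in thetas) / len(thetas)

mu = 100.0                                   # quadratic-constraint weight (assumed)
w = np.linalg.solve(R + mu * C, a0)
w = w / (a0.conj() @ w)                      # unity gain toward the desired signal

for t in (0.0, 25.0, 30.0, 35.0):
    gain_db = 20 * np.log10(abs(w.conj() @ steering(t, n)))
    print(f"response at {t:5.1f} deg: {gain_db:7.2f} dB")
```

Sweeping mu trades interference suppression depth against white-noise gain, which is the same trade the paper's quadratic constraint controls.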
1332

Convergence analysis of symmetric interpolatory subdivision schemes

Oloungha, Stephane B. 12 1900 (has links)
Thesis (PhD (Mathematics))--University of Stellenbosch, 2010. / Contains bibliography. / ENGLISH ABSTRACT: See full text for summary. / AFRIKAANS SUMMARY: See full text for summary.
1333

Resource management in IP networks

Wahabi, Abdoul Rassaki 12 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2001. / ENGLISH ABSTRACT: IP networks offer scalability and flexibility for rapid deployment of value-added IP services. However, with the increased demand and explosive growth of the Internet, carriers require a network infrastructure that is dependable, predictable, and offers consistent network performance. This thesis examines the functionality, performance and implementation aspects of the MPLS mechanisms to minimize the expected packet delay in MPLS networks. Optimal path selection and the assignment of bandwidth to those paths for minimizing the average packet delay are investigated. We present an efficient flow deviation algorithm (EFDA) which assigns a small amount of flow from a set of routes connecting each OD pair to the shortest path connecting the OD pair in the network. The flow is assigned in such a way that the network average packet delay is minimized. Bellman's algorithm is used to find the shortest routes between all OD pairs. The thesis studies the problem of determining the routes between an OD pair and assigning capacities to those routes. The EFDA algorithm iteratively determines the global minimum of the objective function. We also use the optimal flows to compute the optimal link capacities in both single and multirate networks. The algorithm has been applied to several examples and to different models of networks. The results are used to evaluate the performance of the EFDA algorithm and compare the optimal solutions obtained with different starting topologies and different techniques. They all fall within a narrow cost-performance range, and all lie similarly close to the optimal solution. / AFRIKAANS SUMMARY: IP networks provide the scalability and flexibility for the rapid deployment of value-added IP services. The increased demand and explosive expansion of the Internet require reliable, predictable, and consistent network performance. This thesis examines the functionality, performance, and implementation of the MPLS (multiprotocol label switching) mechanisms to minimize the expected packet delay. We discuss an efficient flow deviation algorithm (EFDA) that assigns a small amount of flow from the set of routes connecting each OD (origin-destination) pair to the shortest path linking that pair. The flow is assigned such that the network's average packet delay is minimized. Bellman's algorithm is used to determine the shortest routes between all OD pairs. The thesis discusses the problem of determining the routes between an OD pair and assigning capacities to those routes. The EFDA algorithm determines the global minimum iteratively. We also use optimal flows to compute the optimal link capacities in both single-rate and multirate networks. The algorithm has been applied to several examples and to different network models. The link capacities are used to evaluate the performance of the EFDA algorithm and to compare it with the optimal solutions obtained with different starting topologies and techniques. The results fall within a narrow cost-performance range that also lies close to the optimal solution.
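As a rough illustration of the flow-deviation idea, the sketch below iterates Bellman's algorithm with the marginal M/M/1 delay as the link length and shifts a small fraction of one OD pair's flow onto the current shortest path. The toy topology, capacities, and step size are assumptions for illustration; this is not the thesis's EFDA implementation.

```python
import numpy as np

def bellman_ford(n, edges, src):
    """Bellman's shortest-path algorithm; edges is a list of (u, v, length)."""
    dist = [float("inf")] * n
    pred = [None] * n
    dist[src] = 0.0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v], pred[v] = dist[u] + w, u
    return dist, pred

# Toy network: one OD pair (node 0 -> node 3) served by two disjoint two-hop paths.
links = [(0, 1), (1, 3), (0, 2), (2, 3)]
paths = [[(0, 1), (1, 3)], [(0, 2), (2, 3)]]
cap = np.array([10.0, 10.0, 10.0, 10.0])
path_flow = np.array([8.0, 1.0])             # start with most traffic on one path

def link_flows(pf):
    f = np.zeros(len(links))
    for p, fp in zip(paths, pf):
        for e in p:
            f[links.index(e)] += fp
    return f

step = 0.2                                    # flow-deviation step size (assumed)
for _ in range(40):
    f = link_flows(path_flow)
    # Marginal M/M/1 delay d/df [f / (c - f)] = c / (c - f)^2 as the link length,
    # so the shortest path is the one that least increases total network delay.
    lengths = cap / (cap - f) ** 2
    dist, pred = bellman_ford(4, [(u, v, lengths[i]) for i, (u, v) in enumerate(links)], 0)
    sp, node = [], 3                          # trace predecessors to recover the path
    while pred[node] is not None:
        sp.append((pred[node], node))
        node = pred[node]
    best = next(i for i, p in enumerate(paths) if set(p) == set(sp))
    moved = step * path_flow.sum()            # deviate a small amount of total flow
    path_flow = path_flow * (1 - step)
    path_flow[best] += moved

print(path_flow)                              # flows approximately equalize marginal delays
```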
1334

BINARY GMSK: CHARACTERISTICS AND PERFORMANCE

Tsai, Kuang, Lui, Gee L. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Gaussian Minimum Shift Keying (GMSK) is a form of Continuous Phase Modulation (CPM) whose spectral occupancy can be easily tailored to the available channel bandwidth by a suitable choice of signal parameters. The constant envelope of the GMSK signal enables it to operate with a saturated power amplifier without the spectral re-growth problem. This paper provides a quantitative synopsis of binary GMSK signals in terms of their bandwidth occupancy and coherent demodulation performance. A detailed account of how to demodulate such signals using the Viterbi Algorithm (VA) is given, along with analytical power spectral density (PSD) and computer-simulated bit-error-rate (BER) results for various signal BT products. The effect of adjacent channel interference (ACI) is also quantified. Ideal synchronization for both symbol time and carrier phase is assumed.
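A minimal baseband GMSK modulator makes the signal structure concrete: a Gaussian filter smooths the NRZ data, the result drives a phase integrator with modulation index 1/2, and the complex exponential of that phase has the constant envelope the abstract describes. The BT product, samples per symbol, and filter span below are assumed values, not taken from the paper.

```python
import numpy as np

def gmsk_baseband(bits, bt=0.3, sps=8, span=4):
    """Binary GMSK baseband: Gaussian-filtered NRZ drives an FM phase integrator."""
    t = np.arange(-span / 2, span / 2, 1 / sps)       # time in symbol intervals
    # Gaussian frequency pulse for the given BT product (3-dB bandwidth x T).
    alpha = np.sqrt(np.log(2)) / (2 * np.pi * bt)
    g = np.exp(-t**2 / (2 * alpha**2))
    g /= g.sum()                                      # unit area
    nrz = np.repeat(2 * np.asarray(bits) - 1, sps)    # +/-1 at sps samples per symbol
    freq = np.convolve(nrz, g, mode="same")           # smoothed instantaneous frequency
    phase = np.pi / 2 * np.cumsum(freq) / sps         # modulation index h = 1/2, as in MSK
    return np.exp(1j * phase)                         # constant-envelope complex baseband

bits = np.random.randint(0, 2, 1000)
s = gmsk_baseband(bits, bt=0.3)
print(np.allclose(np.abs(s), 1.0))                    # True: constant envelope
```

Lowering bt narrows the spectrum but lengthens the intersymbol memory, which is exactly why Viterbi demodulation over the phase trellis is the natural coherent receiver.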
1335

Sequential estimation in statistics and steady-state simulation

Tang, Peng 22 May 2014 (has links)
At the onset of the "Big Data" age, we are faced with ubiquitous data in various forms and with various characteristics, such as noise, high dimensionality, autocorrelation, and so on. The question of how to obtain accurate and computationally efficient estimates from such data is one that has stoked the interest of many researchers. This dissertation mainly concentrates on two general problem areas: inference for high-dimensional and noisy data, and estimation of the steady-state mean for univariate data generated by computer simulation experiments. We develop and evaluate three separate sequential algorithms for the two topics. One major advantage of sequential algorithms is that they allow for careful experimental adjustments as sampling proceeds. Unlike one-step sampling plans, sequential algorithms adapt to different situations arising from the ongoing sampling; this makes these procedures efficacious as problems become more complicated and more-delicate requirements need to be satisfied. We will elaborate on each research topic in the following discussion. Concerning the first topic, our goal is to develop a robust graphical model for noisy data in a high-dimensional setting. Under a Gaussian distributional assumption, the estimation of undirected Gaussian graphs is equivalent to the estimation of inverse covariance matrices. Particular interest has focused upon estimating a sparse inverse covariance matrix to reveal insight on the data as suggested by the principle of parsimony. For estimation with high-dimensional data, the influence of anomalous observations becomes severe as the dimensionality increases. To address this problem, we propose a robust estimation procedure for the Gaussian graphical model based on the Integrated Squared Error (ISE) criterion. The robustness result is obtained by using ISE as a nonparametric criterion for seeking the largest portion of the data that "matches" the model. Moreover, an l₁-type regularization is applied to encourage sparse estimation. To address the non-convexity of the objective function, we develop a sequential algorithm in the spirit of a majorization-minimization scheme. We summarize the results of Monte Carlo experiments supporting the conclusion that our estimator of the inverse covariance matrix converges weakly (i.e., in probability) to the latter matrix as the sample size grows large. The performance of the proposed method is compared with that of several existing approaches through numerical simulations. We further demonstrate the strength of our method with applications in genetic network inference and financial portfolio optimization. The second topic consists of two parts, and both concern the computation of point and confidence interval (CI) estimators for the mean µ of a stationary discrete-time univariate stochastic process X ≡ {X_i : i = 1, 2, ...} generated by a simulation experiment. The point estimation is relatively easy when the underlying system starts in steady state; but the traditional way of calculating CIs usually fails since the data encountered in simulation output are typically serially correlated. We propose two distinct sequential procedures that each yield a CI for µ with user-specified reliability and absolute or relative precision.
The first sequential procedure is based on variance estimators computed from standardized time series applied to nonoverlapping batches of observations, and it is characterized by its simplicity relative to methods based on batch means and its ability to deliver CIs for the variance parameter of the output process (i.e., the sum of covariances at all lags). The second procedure is the first sequential algorithm that uses overlapping variance estimators to construct asymptotically valid CI estimators for the steady-state mean based on standardized time series. The advantage of this procedure is that compared with other popular procedures for steady-state simulation analysis, the second procedure yields significant reduction both in the variability of its CI estimator and in the sample size needed to satisfy the precision requirement. The effectiveness of both procedures is evaluated via comparisons with state-of-the-art methods based on batch means under a series of experimental settings: the M/M/1 waiting-time process with 90% traffic intensity; the M/H_2/1 waiting-time process with 80% traffic intensity; the M/M/1/LIFO waiting-time process with 80% traffic intensity; and an AR(1)-to-Pareto (ARTOP) process. We find that the new procedures perform comparatively well in terms of their average required sample sizes as well as the coverage and average half-length of their delivered CIs.
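For orientation, here is a sketch of the classical nonoverlapping batch-means CI that the proposed standardized-time-series procedures are compared against (the procedures themselves are not reproduced here). The AR(1) test process, batch count, and warm-up length are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def batch_means_ci(x, n_batches=20, alpha=0.05):
    """Nonoverlapping batch-means confidence interval for the steady-state mean."""
    m = len(x) // n_batches                      # batch size
    means = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    xbar = means.mean()
    half = stats.t.ppf(1 - alpha / 2, n_batches - 1) * means.std(ddof=1) / np.sqrt(n_batches)
    return xbar - half, xbar + half

# AR(1) with mean 0 and lag-one correlation 0.9 as a serially correlated test process.
rng = np.random.default_rng(1)
x = np.empty(200_000)
x[0] = 0.0
eps = rng.standard_normal(x.size)
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + eps[i]

lo, hi = batch_means_ci(x[1000:])                # 1,000-observation warm-up (assumed)
print(f"95% CI for the mean: ({lo:.4f}, {hi:.4f})")   # should cover the true mean 0
```

Batching works because batch means of a stationary, weakly dependent process become approximately i.i.d. normal as the batch size grows, which restores the validity of the Student-t interval that raw serially correlated observations would violate.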
1336

Understanding the centralized-decentralized electrification paradigm

Levin, Todd 27 August 2014 (has links)
Two methodologies are presented for analyzing the choice between centralized and decentralized energy infrastructures from a least-cost perspective. The first develops a novel minimum-spanning-tree network algorithm to approximate the shortest-length network that connects a given fraction of the total system population. This algorithm is used to identify high-priority locations for decentralized electrification in 150 countries. The second methodology utilizes a mixed-integer programming framework to determine the least-cost combination of centralized and decentralized electricity infrastructure that is capable of serving demand throughout a given system. This methodology is demonstrated through a case study of Rwanda. The centralized-decentralized electrification paradigm is also approached from an energy-security perspective, incorporating stochastic events and probabilistic parameters into a simulation model that is used to compare different development paths. The impact of explicitly modeling stochastic events, as opposed to utilizing a conventional formulation, is also considered. Finally, a subsidy-free lighting cost curve is developed, and a model is presented to compare the costs and benefits of three different financial mechanisms that can be employed to make capital-intensive energy systems more accessible to rural populations. The optimal contract is determined on the basis of utility maximization for a range of costs to the providing agency, and a comprehensive single- and multi-factor sensitivity analysis is performed.
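A crude version of the MST idea can be sketched as follows: build the minimum spanning tree over settlement locations, then cut its longest edges and track how much network length is needed to keep a given fraction of the population connected. The synthetic coordinates and populations below are assumptions; the thesis's actual algorithm and data are not reproduced.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
pts = rng.random((60, 2))                        # settlement locations (synthetic)
pop = rng.integers(100, 5000, size=60)           # settlement populations (synthetic)

# The MST over pairwise distances approximates the shortest network joining all points.
mst = minimum_spanning_tree(squareform(pdist(pts))).toarray()

# Cut MST edges from longest to shortest; after each cut, treat the connected
# component holding the most population as the centralized grid and record the
# network length needed to serve that fraction of the population.
edges = sorted(((mst[i, j], i, j) for i, j in zip(*np.nonzero(mst))), reverse=True)
for k, (length, i, j) in enumerate(edges):
    mst[i, j] = 0.0
    n_comp, labels = connected_components(mst, directed=False)
    comp_pop = [pop[labels == c].sum() for c in range(n_comp)]
    main = labels == int(np.argmax(comp_pop))
    grid_len = mst[np.ix_(main, main)].sum()
    if k % 10 == 0:
        print(f"grid length {grid_len:6.3f} serves {max(comp_pop) / pop.sum():5.1%}")
```

Settlements that fall out of the main component early (long, low-population spurs) are the natural high-priority candidates for decentralized supply.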
1337

Accelerated algorithms for composite saddle-point problems and applications

He, Yunlong 12 January 2015 (has links)
This dissertation considers the composite saddle-point (CSP) problem, which is motivated by real-world applications in machine learning and image processing. Two new accelerated algorithms for solving composite saddle-point problems are introduced. Due to the two-block structure of the CSP problem, it can be solved by any algorithm belonging to the block-decomposition hybrid proximal extragradient (BD-HPE) framework. The framework consists of a family of inexact proximal point methods for solving a general two-block structured monotone inclusion problem which, at every iteration, solves two prox sub-inclusions according to a certain relative error criterion. By exploiting the fact that the two prox sub-inclusions in the context of the CSP problem are equivalent to two composite convex programs, the first part of this dissertation proposes a new instance of the BD-HPE framework that approximately solves them using an accelerated gradient method. It is shown that this new instance has better iteration complexity than the previous ones. The second part of this dissertation introduces a new algorithm for solving a special class of CSP problems. The new algorithm is a special instance of the hybrid proximal extragradient (HPE) framework in which a Nesterov-type accelerated variant is used to approximately solve the prox subproblems. One advantage of this method is that it works for any constant choice of proximal stepsize. Moreover, a suitable choice of the latter stepsize yields a method with the best known (accelerated inner) iteration complexity for the aforementioned class of saddle-point problems. Experimental results on both synthetic CSP problems and real-world problems show that the two methods significantly outperform several state-of-the-art algorithms.
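The inner accelerated solver can be illustrated with a standard Nesterov/FISTA-style iteration on a smooth convex stand-in for a prox subproblem. The least-squares instance and iteration count are assumptions; this is the generic accelerated gradient method, not the dissertation's specific BD-HPE instance.

```python
import numpy as np

def nesterov_agd(grad, x0, L, n_iters=500):
    """Nesterov's accelerated gradient method for a convex, L-smooth objective."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iters):
        x_new = y - grad(y) / L                     # gradient step at the extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Least-squares instance standing in for a prox subproblem (illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
x = nesterov_agd(lambda z: A.T @ (A @ z - b), np.zeros(50), L)
print(np.linalg.norm(A.T @ (A @ x - b)))            # near-zero gradient at the solution
```

The momentum sequence is what yields the accelerated O(1/k^2) rate instead of plain gradient descent's O(1/k), which is the source of the improved inner iteration complexity the abstract refers to.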
1338

Statistical Methods for Handling Intentional Inaccurate Responders

McQuerry, Kristen J. 01 January 2016 (has links)
In self-report data, participants who provide incorrect responses are known as intentional inaccurate responders. This dissertation provides statistical analyses for addressing intentional inaccurate responses in the data. Previous work with adolescent self-report data labeled survey participants who intentionally provide inaccurate answers as mischievous responders. This phenomenon also occurs in clinical research. For example, pregnant women who smoke may report that they are nonsmokers. Our advantage is that we do not rely solely on self-report answers and can verify responses with lab values. Currently, there is no clear method for handling these intentional inaccurate respondents when it comes to making statistical inferences. We propose using an EM algorithm to account for the intentional behavior while retaining all responses in the data. The performance of this model is evaluated using both simulated data and real data. The strengths and weaknesses of the EM algorithm approach are demonstrated.
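A generic two-component EM captures the flavor of the approach: biomarker values for self-reported nonsmokers are modeled as a mixture of truthful and inaccurate responders, and the estimated mixing weight is the inaccurate-response rate. The Gaussian mixture form and all data below are illustrative assumptions, not the dissertation's model.

```python
import numpy as np
from scipy.stats import norm

# Biomarker values for self-reported nonsmokers: most are truthful (low values),
# an unknown fraction are inaccurate responders (high values). Synthetic data.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1, 0.5, 850), rng.normal(5, 1.0, 150)])

# EM for a two-component Gaussian mixture; pi is the inaccurate-responder rate.
pi, mu, sd = 0.5, np.array([0.0, 4.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each respondent is an inaccurate responder.
    w1 = pi * norm.pdf(x, mu[1], sd[1])
    w0 = (1 - pi) * norm.pdf(x, mu[0], sd[0])
    r = w1 / (w0 + w1)
    # M-step: update the mixture weight, means, and standard deviations.
    pi = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=1 - r),
                  np.average((x - mu[1]) ** 2, weights=r)])

print(f"estimated inaccurate-responder rate: {pi:.3f}")   # close to the true 0.15
```

Because every observation keeps a posterior weight rather than being discarded, inference retains all responses while downweighting the suspect ones, which matches the stated goal of the proposal.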
1339

A behavioural approach to financial portfolio selection problem : an empirical study using heuristics

Grishina, Nina January 2014 (has links)
The behaviourally based portfolio selection problem, with the investor's loss aversion and risk aversion biases in portfolio choice under uncertainty, is studied. The main results of this work are heuristic approaches developed for the prospect theory and cumulative prospect theory models proposed by Kahneman and Tversky in 1979 and 1992, as well as an empirical comparative analysis of these models and the traditional mean-variance and index tracking models. The crucial assumption is that the behavioural features of the (cumulative) prospect theory model provide better downside protection than traditional approaches to the portfolio selection problem. In this research, large-scale computational results for the (cumulative) prospect theory model have been obtained. Previously, as far as we are aware, only small laboratory tests (2-3 artificial assets) had been presented in the literature. In order to investigate empirically the performance of the behaviourally based models, a differential evolution algorithm and a genetic algorithm which are capable of dealing with a large universe of assets have been developed. Specific breeding and mutation operators, as well as normalisation, have been implemented in the algorithms. A tabulated comparative analysis of the algorithms' parameter choices is presented. The performance of the studied models has been tested out-of-sample under different conditions using the bootstrap method, as well as by simulating the distribution of a growing market and a fat-tailed t-distribution, which characterises the dynamics of a decreasing or crisis market. Cardinality and CVaR constraints have been added to the basic mean-variance and prospect theory models. The comparative analysis of the empirical results uses several criteria, such as CPU time, the ratio between mean portfolio return and standard deviation, mean portfolio return, standard deviation, VaR, and CVaR as alternative measures of risk. A strong influence of the reference point, loss aversion, and risk aversion on the prospect theory model's results has been found. The prospect theory model with the reference point being the index is compared to the index tracking model. A portfolio diversification benefit has been found. However, the aggressive behaviour, in terms of returns, of the prospect theory model with the reference point being the index leads to worse performance of this model in a bearish market compared to the index tracking model. A tabulated comparative analysis of the performance of all studied models is provided for both in-sample and out-of-sample tests.
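As a sketch of the heuristic setup, the snippet below uses SciPy's differential evolution to maximize the mean Kahneman-Tversky prospect value of a long-only portfolio over simulated return scenarios, with a zero reference point. The value-function parameters (alpha = 0.88, lambda = 2.25) are the standard published 1992 estimates; the scenario data, bounds, and weight normalization are assumptions, and the thesis's own breeding and mutation operators are not reproduced.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(500, 8))       # simulated scenario returns, 8 assets

def pt_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function, zero reference point, same exponent for losses."""
    gains = np.clip(x, 0.0, None) ** alpha
    losses = -lam * np.clip(-x, 0.0, None) ** alpha
    return gains + losses

def neg_prospect(w):
    w = w / (w.sum() + 1e-12)                    # normalize to long-only weights
    return -pt_value(R @ w).mean()               # maximize the mean prospect value

res = differential_evolution(neg_prospect, bounds=[(0.0, 1.0)] * 8, seed=1, maxiter=200)
w = res.x / res.x.sum()
print(np.round(w, 3))
```

The loss-aversion coefficient lam penalizes downside scenarios more than equal-sized gains are rewarded, which is the mechanism behind the downside-protection hypothesis the thesis tests.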
1340

Investigation of voltage- and light-sensitive ion channels

Fromme, Ulrich 29 February 2016 (has links)
No description available.
