111 |
An Evaluation of Traffic Matrix Estimation Techniques for Large-Scale IP Networks / Adelani, Titus Olufemi 09 February 2010 (has links)
The information on the volume of traffic flowing between all possible origin and destination pairs in an IP network during a given period of time is generally referred to as the traffic matrix (TM). This information, which is very important for various traffic engineering tasks, is costly and difficult to obtain on large operational IP networks; consequently, it is often inferred from readily available link load measurements.
In this thesis, we evaluated five TM estimation techniques, namely Tomogravity (TG), Entropy Maximization (EM), Quadratic Programming (QP), Linear Programming (LP) and Neural Network (NN), each with gravity and worst-case bound (WCB) initial estimates. We found that the EM technique consistently performed best in most of our simulations and that the gravity model yielded better initial estimates than the WCB model. A hybrid of these techniques did not yield a considerable decrease in estimation errors. We achieved the most significant reduction in errors, however, by combining iteratively proportionally-fitted estimates with the EM technique. We therefore propose this combination as a viable approach for estimating the traffic matrix of large-scale IP networks.
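To make the gravity and iterative-proportional-fitting steps concrete, here is a minimal sketch (the toy totals and names are illustrative, not taken from the thesis): the gravity model spreads each origin's traffic in proportion to destination totals, and iterative proportional fitting rescales a prior matrix until its marginals match the measured totals.

```python
import numpy as np

def gravity_estimate(out_totals, in_totals):
    """Gravity-model initial TM estimate: T_ij proportional to O_i * D_j."""
    return np.outer(out_totals, in_totals) / out_totals.sum()

def ipf(prior, out_totals, in_totals, iters=100):
    """Iterative proportional fitting: alternately rescale rows and
    columns of a prior matrix to match the measured marginals."""
    T = prior.astype(float).copy()
    for _ in range(iters):
        T *= (out_totals / T.sum(axis=1))[:, None]  # match row sums
        T *= (in_totals / T.sum(axis=0))[None, :]   # match column sums
    return T

out_totals = np.array([30.0, 70.0])  # traffic leaving each node
in_totals = np.array([40.0, 60.0])   # traffic arriving at each node
T0 = gravity_estimate(out_totals, in_totals)
T = ipf(np.ones((2, 2)), out_totals, in_totals)
```

An EM-style refinement, as evaluated in the thesis, would further adjust such estimates against the full link-load constraints rather than only the row and column totals.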
|
112 |
Necessary and sufficient conditions in the problem of optimal investment in incomplete markets / Kramkov, Dimitrij O., Schachermayer, Walter January 2001 (has links) (PDF)
Following [10] we continue the study of the problem of expected utility maximization in incomplete markets. Our goal is to find minimal conditions on a model and a utility function under which the key assertions of the theory hold true. In [10] we proved that a minimal condition on the utility function alone, i.e. a minimal market-independent condition, is that the asymptotic elasticity of the utility function is strictly less than 1. In this paper we show that a necessary and sufficient condition on both the utility function and the model is that the value function of the dual problem is finite. (authors' abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
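For orientation, the asymptotic-elasticity condition from [10] and the conjugate function underlying the dual problem can be written in standard notation (stated here as background, not reproduced from the paper itself):

```latex
AE(U) \;=\; \limsup_{x \to \infty} \frac{x\,U'(x)}{U(x)} \;<\; 1,
\qquad
V(y) \;=\; \sup_{x > 0}\,\bigl[\,U(x) - xy\,\bigr].
```

The paper's necessary and sufficient condition is the finiteness of the dual value function built from the conjugate $V$ over the dual domain of the model.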
|
113 |
Time series analysis of Saudi Arabia oil production data / Albarrak, Abdulmajeed Barrak 14 December 2013 (has links)
Saudi Arabia is the largest petroleum producer and exporter in the world. The Saudi Arabian economy depends heavily on the production and export of oil, which motivates our research on Saudi Arabia's oil production. The prime objective of this research is to find the most appropriate models for analyzing Saudi Arabia's oil production data. Initially we considered fitting integrated autoregressive moving average (ARIMA) models to the data, but most of the variables under study exhibit some form of volatility, so we ultimately adopted autoregressive conditional heteroscedastic (ARCH) models; in the absence of an ARCH effect, such a model reduces to an ARIMA model. The presence of missing values in almost every variable complicates the analysis, since parameter estimation for an ARCH model does not converge when observations are missing. As a remedy, we first estimate the missing observations, employing the expectation-maximization (EM) algorithm. Because our data are time series, a simple EM algorithm is not appropriate, and there is also evidence of outliers in the data. We therefore employ an EM algorithm based on robust least trimmed squares (LTS) regression to estimate the missing values. After estimating the missing values, we apply the White test to select the most appropriate ARCH model for each of the sixteen variables under study. A normality test on the resulting residuals is performed for each variable to check the validity of the fitted model. / ARCH/GARCH models, outliers and robustness : tests for normality and estimation of missing values in time series -- Outlier analysis and estimation of missing values by robust EM algorithm for Saudi Arabia oil production data -- Selection of ARCH models for Saudi Arabia oil production data. / Department of Mathematical Sciences
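As an illustration of testing for ARCH effects (the thesis uses the White test; shown here is the closely related Engle LM test, run on simulated rather than Saudi data):

```python
import numpy as np

def arch_lm_test(resid, lags=1):
    """Engle's LM test: regress squared residuals on their own lags;
    LM = n * R^2 is asymptotically chi-square with `lags` d.o.f."""
    e2 = resid ** 2
    n = len(e2) - lags
    # design matrix: intercept plus lagged squared residuals
    X = np.column_stack([np.ones(n)] +
                        [e2[lags - j - 1: lags - j - 1 + n] for j in range(lags)])
    y = e2[lags:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_ss = np.sum((y - X @ beta) ** 2)
    total_ss = np.sum((y - y.mean()) ** 2)
    return n * (1.0 - resid_ss / total_ss)

# simulate an ARCH(1) series: today's variance depends on yesterday's shock
rng = np.random.default_rng(0)
e = np.zeros(2000)
for t in range(1, len(e)):
    sigma2 = 0.2 + 0.5 * e[t - 1] ** 2
    e[t] = np.sqrt(sigma2) * rng.standard_normal()
lm = arch_lm_test(e, lags=1)  # large values reject "no ARCH effect"
```

For a series with no ARCH effect the statistic stays near the chi-square critical value; here the simulated conditional heteroscedasticity drives it far above.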
|
115 |
Inventory Decisions for the Price Setting Retailer: Extensions to the EOQ Setting / Ramasra, Raynier January 2011 (has links)
Practical inventory settings often include multiple generations of the same product on
hand. New products often arrive before old stock is exhausted, but most inventory models
do not account for this. Such a setting gives rise to the possibility of inter-generational substitution between products. We study a retailer that stocks two product generations and show that, from a cost perspective, the retailer is better off stocking only one generation. We then move to a profit scheme and develop a price-setting profit-maximization model, proving that the one- and two-generation profit models each admit a unique solution. We
use the profit model to show that there are cases where it is more profitable to stock two generations. We discuss utility and preference extensions to the profit model and present the general n-product case.
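For orientation, the baseline EOQ trade-off that the thesis extends balances ordering cost against holding cost (this is the classic single-product model, not the thesis's two-generation profit model; the numbers are illustrative):

```python
from math import sqrt

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2*D*K / h)."""
    return sqrt(2.0 * demand_rate * order_cost / holding_cost)

def avg_cost(Q, demand_rate, order_cost, holding_cost):
    """Average ordering-plus-holding cost per unit time at lot size Q."""
    return demand_rate / Q * order_cost + holding_cost * Q / 2.0

Q_star = eoq(demand_rate=1200, order_cost=50, holding_cost=3)  # 200 units
```

The price-setting extension studied in the thesis makes demand a function of price and maximizes profit rather than minimizing this cost.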
|
116 |
Towards Finding Optimal Mixture Of Subspaces For Data Classification / Musa, Mohamed Elhafiz Mustafa 01 October 2003 (has links) (PDF)
In pattern recognition, when data has different structures in different parts of the input space, fitting one global model can be slow and inaccurate. Local learning methods can quickly learn the structure of the data in local regions, offering faster and more accurate model fitting. However, breaking the training data set into smaller subsets may lead to a curse-of-dimensionality problem, as a training subset may not be large enough to estimate the required parameters of the submodels, and enlarging the training data may not be feasible in many situations. Interestingly, the data in local regions become more correlated, so decorrelation methods can reduce the data dimension and hence the number of parameters. In other words, we can find uncorrelated low-dimensional subspaces that capture most of the data variability. Current subspace modelling methods have shown better performance than global modelling methods for this type of training data structure. Nevertheless, these methods need further research, as they suffer from two limitations:
- There is no standard method to specify the optimal number of subspaces.
- There is no standard method to specify the optimal dimensionality for each subspace.
In current models these two parameters are determined beforehand. In this dissertation we propose and test algorithms that automatically find a suboptimal number of principal subspaces and a suboptimal dimensionality for each principal subspace.
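One simple instance of the local-subspace idea (not the dissertation's algorithm): partition the data, then fit a PCA per partition, letting each local dimensionality be the smallest that explains a chosen fraction of the variance. Function names, the `var_kept` threshold, and the toy data are illustrative.

```python
import numpy as np

def local_subspaces(X, n_clusters=2, var_kept=0.9, iters=50):
    """Partition X with k-means, then fit a PCA per cluster; each local
    dimensionality is the smallest explaining `var_kept` of the variance."""
    centers = X[np.linspace(0, len(X) - 1, n_clusters).astype(int)]
    for _ in range(iters):  # plain k-means with deterministic init
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([X[labels == k].mean(axis=0)
                            for k in range(n_clusters)])
    subspaces = []
    for k in range(n_clusters):
        Xk = X[labels == k] - X[labels == k].mean(axis=0)
        _, s, Vt = np.linalg.svd(Xk, full_matrices=False)
        ratios = np.cumsum(s ** 2) / np.sum(s ** 2)
        d = int(np.searchsorted(ratios, var_kept)) + 1
        subspaces.append(Vt[:d])  # orthonormal basis of the local subspace
    return labels, subspaces

# toy data: two clusters, each lying on its own line in 3-D
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 1)) * np.array([1.0, 1.0, 0.0])
B = rng.normal(size=(100, 1)) * np.array([0.0, 1.0, -1.0]) + 10.0
labels, subspaces = local_subspaces(np.vstack([A, B]))
```

The dissertation's contribution is choosing the number of subspaces and their dimensionalities automatically; the fixed `var_kept` threshold here is a simple stand-in for that selection step.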
|
117 |
Finding the Maximizers of the Information Divergence from an Exponential Family / Das Auffinden der Maximierer der Informationsdivergenz von einer Exponentialfamilie / Rauh, Johannes 19 October 2011 (has links) (PDF)
The subject of this thesis is the maximization of the information divergence from an exponential family on a finite set, a problem first formulated by Nihat Ay. A special case is the maximization of the mutual information or the multiinformation between different parts of a composite system.
My thesis contributes mainly to the mathematical aspects of the optimization problem. A reformulation is found that relates the maximization of the information divergence with the maximization of an entropic quantity, defined on the normal space of the exponential family. This reformulation simplifies calculations in concrete cases and gives theoretical insight about the general problem.
A second emphasis of the thesis is on examples that demonstrate how the theoretical results can be applied in particular cases. Third, my thesis contains first results on the characterization of exponential families with a small maximum value of the information divergence.
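In symbols, with the Kullback-Leibler divergence $D(P\|Q)=\sum_x P(x)\log\frac{P(x)}{Q(x)}$ on a finite set, the optimization problem can be stated as follows (a standard formulation, given here for orientation):

```latex
D(P \,\|\, \mathcal{E}) \;=\; \inf_{Q \in \overline{\mathcal{E}}} D(P \,\|\, Q),
\qquad
\text{maximize } D(P \,\|\, \mathcal{E}) \text{ over all probability measures } P.
```

When $\mathcal{E}$ is the independence model of a composite system, $D(P\,\|\,\mathcal{E})$ is the multiinformation, reducing to the mutual information for a system with two parts.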
|
118 |
Estimating parameters in Markov models for longitudinal studies with missing data or surrogate outcomes / Yeh, Hung-Wen. Chan, Wenyaw. January 2007 (has links)
Thesis (Ph. D.)--University of Texas Health Science Center at Houston, School of Public Health, 2007. / Includes bibliographical references (leaves 58-59).
|
119 |
Object and concept recognition for content-based image retrieval /Li, Yi, January 2005 (has links)
Thesis (Ph. D.)--University of Washington, 2005. / Vita. Includes bibliographical references (p. 82-87).
|
120 |
Bayesian Networks and Gaussian Mixture Models in Multi-Dimensional Data Analysis with Application to Religion-Conflict DataJanuary 2012 (has links)
abstract: This thesis examines the application of statistical signal processing approaches to data arising from surveys intended to measure psychological and sociological phenomena underpinning human social dynamics. The use of signal processing methods to analyze signals arising from the measurement of social, biological, and other non-traditional phenomena has been an important and growing area of signal processing research over the past decade. Here, we explore the application of statistical modeling and signal processing concepts to data obtained from the Global Group Relations Project, specifically to understand and quantify the effects and interactions of social psychological factors related to intergroup conflicts. We use Bayesian networks, modeled by directed acyclic graphs, to specify prospective models of conditional dependence between social psychological factors and conflict variables, with the significant interactions modeled as conditional probabilities. Since the data are sparse and multi-dimensional, we regress Gaussian mixture models (GMMs) against the data to estimate the conditional probabilities of interest. The parameters of the GMMs are estimated using the expectation-maximization (EM) algorithm. However, the EM algorithm may suffer from over-fitting due to the high dimensionality and limited observations of this data set, so the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used for GMM order estimation. To aid intuitive understanding of the interactions between social variables and intergroup conflicts, we introduce a color-based visualization scheme in which color intensities are proportional to the observed conditional probabilities. / Dissertation/Thesis / M.S. Electrical Engineering 2012
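A minimal one-dimensional sketch of the GMM/EM/BIC machinery described above (synthetic data; the thesis works with multi-dimensional survey data and also considers AIC):

```python
import numpy as np

def gmm_em_1d(x, k, iters=200):
    """Fit a 1-D Gaussian mixture by EM; returns parameters and log-likelihood."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread the initial means
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    dens = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
    return w, mu, var, np.log(dens.sum(axis=1)).sum()

def bic(loglik, n_params, n):
    """Bayesian information criterion: smaller is better."""
    return n_params * np.log(n) - 2.0 * loglik

# synthetic bimodal sample; BIC is used to pick the mixture order
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-4, 1, 400), rng.normal(4, 1, 400)])
scores = {k: bic(gmm_em_1d(x, k)[3], n_params=3 * k - 1, n=len(x))
          for k in (1, 2, 3)}
best = min(scores, key=scores.get)
```

The `3 * k - 1` parameter count covers k means, k variances, and k - 1 free mixture weights; BIC's penalty term guards against the over-fitting the abstract mentions.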
|