181
Analysing the spatial persistence of population and wealth during Apartheid / Niemand, Pieter Du Toit, January 2015
This dissertation analyses the spatial persistence of population in South Africa over the period 1911 to 2011. A comprehensive review is given of the history and development of geographical economics in order to understand the dynamics of the forces of agglomeration. In addition, the history of the development of South Africa is discussed, with special focus on the geographical, economic and political factors that gave rise to the unequal distribution of population and wealth in the country. In the empirical analysis Zipf's law was applied, and it was found that South Africa's population was more evenly spread in 1911. When the law was applied to the 2011 data, the Pareto exponent of the OLS log-linear regression indicated that urban agglomeration had become more persistent. Although this might suggest that apartheid did not influence agglomeration in South Africa, it is argued that the nature of the agglomeration was in fact shaped by restrictive measures placed on the urbanisation of the population and by industrial decentralisation policies. It is shown that apartheid policy altered the equilibrium spatial distribution of population and wealth, which led to smaller-than-optimal primate and second-largest magisterial districts, too many secondary cities of similar size, and too many small and uneconomical rural settlements. / MCom (Economics), North-West University, Potchefstroom Campus, 2015
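To make the empirical method concrete, the following is a minimal Python sketch of the rank-size (Zipf) regression described above; the population figures are hypothetical, and the Pareto exponent is the negative slope of the OLS fit of log(rank) on log(size).

# Zipf (rank-size) regression sketch; population figures are hypothetical.
import numpy as np

populations = np.array([3_000_000, 1_200_000, 800_000, 450_000,
                        300_000, 210_000, 150_000, 90_000])
sizes = np.sort(populations)[::-1]            # order districts largest first
ranks = np.arange(1, len(sizes) + 1)

# OLS log-linear fit: log(rank) = intercept + slope*log(size).
slope, intercept = np.polyfit(np.log(sizes), np.log(ranks), 1)
print(f"Pareto exponent: {-slope:.3f}")
# An exponent near 1 matches Zipf's law; larger values indicate a more
# even spread, smaller values stronger agglomeration in the largest places.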
182
CUSUM procedures based on sequential ranks / Van Zyl, Corli, January 2015
The main objective of this dissertation is the development of CUSUM procedures based on signed and unsigned sequential ranks. These CUSUMs can be applied to detect changes in the location or dispersion of a process. The signed and unsigned sequential rank CUSUMs are distribution-free and robust against the effect of outliers in the data. The only assumption that these CUSUMs require is that the in-control distribution is symmetric around a known location parameter. These procedures specifically do not require the existence of any higher-order moments. Another advantage of these CUSUMs is that Monte Carlo simulation can readily be applied to deliver valid estimates of control limits, irrespective of what the underlying distribution may be. Other objectives of this dissertation include a brief discussion of the results and refinements of the CUSUM in the literature. We justify the use of a signed sequential rank statistic. Also, we evaluate the relative efficiency of the suggested procedure numerically and provide three real-world applications from the engineering and financial industries. / MSc (Risk Analysis), North-West University, Potchefstroom Campus, 2015
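As an illustration of the idea (not the dissertation's exact statistic), here is a minimal Python sketch of a one-sided CUSUM built on signed sequential ranks, assuming a known in-control centre theta0; the standardization, reference value k and control limit h are illustrative, and in practice h would be estimated by Monte Carlo simulation as described above.

# One-sided CUSUM on signed sequential ranks; a sketch, not the
# dissertation's exact procedure. theta0 is the known in-control centre.
import numpy as np

def signed_seq_rank_cusum(x, theta0=0.0, k=0.25, h=4.0):
    dev = np.asarray(x, dtype=float) - theta0
    S, path = 0.0, []
    for t in range(1, len(dev) + 1):
        # sequential rank of |x_t| among |x_1|, ..., |x_t|
        r = np.sum(np.abs(dev[:t]) <= np.abs(dev[t - 1]))
        # signed rank scaled to roughly zero mean and unit variance
        u = np.sign(dev[t - 1]) * (r / (t + 1)) * np.sqrt(3.0)
        S = max(0.0, S + u - k)                    # upper CUSUM recursion
        path.append(S)
        if S > h:
            return t, path                         # alarm: upward location shift
    return None, path

rng = np.random.default_rng(1)
data = np.concatenate([rng.standard_normal(50),         # in control
                       rng.standard_normal(50) + 1.0])  # shifted mean
print("alarm at observation:", signed_seq_rank_cusum(data)[0])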
183
Computing with functions in two dimensions / Townsend, Alex, January 2014
New numerical methods are proposed for computing with smooth scalar- and vector-valued functions of two variables defined on rectangular domains. Functions are approximated to essentially machine precision by an iterative variant of Gaussian elimination that constructs near-optimal low rank approximations. Operations such as integration, differentiation, and function evaluation are particularly efficient. Explicit convergence rates are shown for the singular values of differentiable and separately analytic functions, and examples are given to demonstrate some paradoxical features of low rank approximation theory. Analogues of QR, LU, and Cholesky factorizations are introduced for matrices that are continuous in one or both directions, deriving a continuous linear algebra. New notions of triangular structures are proposed and the convergence of the infinite series associated with these factorizations is proved under certain smoothness assumptions. A robust numerical bivariate rootfinder is developed for computing the common zeros of two smooth functions via a resultant method. Using several specialized techniques the algorithm can accurately find the simple common zeros of two functions with polynomial approximants of high degree (≥ 1000). Lastly, low rank ideas are extended to linear partial differential equations (PDEs) with variable coefficients defined on rectangles. When these ideas are used in conjunction with a new one-dimensional spectral method the resulting solver is spectrally accurate and efficient, requiring O(n²) operations for rank 1 partial differential operators, O(n³) for rank 2, and O(n⁴) for rank ≥ 3 to compute an n × n matrix of bivariate Chebyshev expansion coefficients for the PDE solution. The algorithms in this thesis are realized in a software package called Chebfun2, which is an integrated two-dimensional component of Chebfun.
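A discrete analogue of the construction conveys the idea: Gaussian elimination with complete pivoting, applied to a sampled function, builds a near-optimal low rank approximation one rank-1 term at a time. The sketch below works on a matrix of samples; Chebfun2 itself operates on functions with adaptive resolution.

# Iterative Gaussian elimination (complete pivoting) for low rank
# approximation of a sampled bivariate function; a sketch of the idea.
import numpy as np

def low_rank_ge(F, tol=1e-13, max_rank=100):
    """Greedy rank-1 peeling: F ~ C @ R after k steps."""
    E = F.copy()                                   # residual
    cols, rows = [], []
    scale = np.abs(F).max()
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(E)), E.shape)
        piv = E[i, j]                              # complete pivot
        if abs(piv) <= tol * scale:
            break
        cols.append(E[:, j] / piv)
        rows.append(E[i, :].copy())
        E = E - np.outer(cols[-1], rows[-1])       # zero out row i, column j
    return np.array(cols).T, np.array(rows)

x = np.linspace(-1, 1, 200)
F = np.cos(10 * np.outer(x, x)) + np.exp(-np.add.outer(x**2, x**2))
C, R = low_rank_ge(F)
print("numerical rank:", C.shape[1], "max error:", np.abs(F - C @ R).max())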
184
Large scale optimization methods for metric and kernel learning / Jain, Prateek, 06 November 2014
A large number of machine learning algorithms are critically dependent on the underlying distance/metric/similarity function. Learning an appropriate distance function is therefore crucial to the success of many methods. The class of distance functions that can be learned accurately is characterized by the amount and type of supervision available to the particular application. In this thesis, we explore a variety of such distance learning problems using different amounts/types of supervision and provide efficient and scalable algorithms to learn appropriate distance functions for each of these problems. First, we propose a generic regularized framework for Mahalanobis metric learning and prove that, for a wide variety of regularization functions, metric learning can be used for efficiently learning a kernel function incorporating the available side-information. Furthermore, we provide a method for fast nearest neighbor search using the learned distance/kernel function. We show that a variety of existing metric learning methods are special cases of our general framework; hence, our framework also provides a kernelization scheme and a fast similarity search scheme for such methods. Second, we consider a variation of our standard metric learning framework where the side-information is incremental, streaming and cannot be stored. For this problem, we provide an efficient online metric learning algorithm that compares favorably to existing methods both theoretically and empirically. Next, we consider a contrasting scenario where the amount of supervision provided is extremely small compared to the number of training points. For this problem, we consider two different modeling assumptions: 1) the data lie on a low-dimensional linear subspace, and 2) the data lie on a low-dimensional non-linear manifold. The first assumption leads, in particular, to the problem of matrix rank minimization over polyhedral sets, which is of immense interest in numerous fields including optimization, machine learning, computer vision, and control theory. We propose a novel online-learning-based optimization method for the rank minimization problem and provide provable approximation guarantees for it. The second assumption leads to our geometry-aware metric/kernel learning formulation, where we jointly model the metric/kernel over the data along with the underlying manifold. We provide an efficient alternating minimization algorithm for this problem and demonstrate its wide applicability and effectiveness by applying it to various machine learning tasks such as semi-supervised classification, colored dimensionality reduction, and manifold alignment. Finally, we consider the task of learning distance functions under no supervision, which we cast as a problem of learning disparate clusterings of the data. To this end, we propose a discriminative approach and a generative-model-based approach, and we provide efficient algorithms with convergence guarantees for both.
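As a schematic illustration of the Mahalanobis metric learning setting (not the thesis's specific algorithms), the sketch below makes hinge-style online updates from pairwise similarity constraints and projects back onto the positive semidefinite cone; the margin and step size are arbitrary.

# Online Mahalanobis metric learning from pairwise constraints; a
# schematic sketch of the setting, not the thesis's algorithms.
import numpy as np

def online_metric_update(M, x, y, similar, margin=1.0, eta=0.05):
    d = x - y
    dist2 = d @ M @ d                       # squared Mahalanobis distance
    if similar and dist2 > margin:          # pull similar pairs together
        M = M - eta * np.outer(d, d)
    elif not similar and dist2 < margin:    # push dissimilar pairs apart
        M = M + eta * np.outer(d, d)
    w, V = np.linalg.eigh(M)                # project onto the PSD cone
    return (V * np.maximum(w, 0.0)) @ V.T

rng = np.random.default_rng(0)
M = np.eye(5)
for _ in range(200):                        # stream of labelled pairs
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    M = online_metric_update(M, x, y, similar=bool(rng.integers(2)))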
185
Optimal Tests for Symmetry / Cassart, Delphine, 01 June 2007
In this work, we propose parametric and nonparametric testing procedures that are locally and asymptotically optimal in the sense of Hajek and Le Cam, for three models of asymmetry.
The construction of asymmetric models is a research topic that has seen considerable development in recent years, and obtaining optimal tests (for three different models) is an essential step towards putting these models into practice.
Our approach rests on Le Cam's theory, on the one hand, to obtain the asymptotic normality properties on which the construction of the optimal parametric tests is based, and on Hajek's theory on the other hand, which, via an invariance principle, yields the nonparametric procedures.
We consider two classes of asymmetric univariate distributions: one based on an Edgeworth expansion (described in Chapter 1), and the other constructed by using different scale parameters for positive and negative values (the Fechner model, described in Chapter 2).
The elliptical asymmetry model studied in the final chapter is a multivariate generalisation of the model of Chapter 2.
For each of these models, we propose tests of the hypothesis of symmetry about a fixed centre, and then about an unspecified centre.
After describing the model for which we construct the optimal procedures, we establish the local asymptotic normality property. From this result we are able to construct the locally and asymptotically optimal parametric tests. These tests are only valid, however, if the underlying density f is correctly specified. They therefore have the merit of determining the parametric efficiency bounds, but are difficult to apply in practice.
We therefore adapt these tests so that symmetry about a fixed or unspecified centre can be tested when the underlying density is treated as a nuisance parameter. The resulting tests remain locally and asymptotically optimal under f, while staying valid under a broad class of densities.
Using the invariance properties of the submodel identified by the null hypothesis, we obtain signed-rank tests that are locally and asymptotically optimal under f and valid under a broad class of densities. In particular, we present the tests based on normal scores (van der Waerden tests), which are optimal under Gaussian assumptions while remaining valid when that assumption fails.
To compare the performance of the parametric and nonparametric tests presented, we compute the asymptotic relative efficiencies of the nonparametric tests with respect to the pseudo-Gaussian tests under a broad class of non-Gaussian densities, and we report some simulation results.
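For a concrete instance of the normal-scores procedure, here is a minimal Python sketch of a van der Waerden-type signed-rank test of symmetry about a fixed centre theta0; the standardized statistic is asymptotically standard normal under the null.

# van der Waerden (normal-scores) signed-rank test of symmetry about a
# fixed centre theta0; a minimal sketch.
import numpy as np
from scipy import stats

def vdw_symmetry_test(x, theta0=0.0):
    d = np.asarray(x, dtype=float) - theta0
    n = len(d)
    ranks = stats.rankdata(np.abs(d))                 # ranks of |x_i - theta0|
    scores = stats.norm.ppf((ranks / (n + 1) + 1) / 2)
    T = np.sum(np.sign(d) * scores)                   # signed normal scores
    z = T / np.sqrt(np.sum(scores ** 2))              # standardized statistic
    return z, 2 * stats.norm.sf(abs(z))               # two-sided p-value

rng = np.random.default_rng(0)
z, p = vdw_symmetry_test(rng.standard_normal(200))    # symmetric null data
print(f"z = {z:.2f}, p = {p:.3f}")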
186
Characteristics of High School Girls which May Lead to Early Marriage / Weaver, Hazel Stewart, 1960-
The problem of this study was to isolate some of the characteristics of tenth-grade girls which may lead to early marriage. The characteristics studied were: sibling rank, influence of a broken home, parents' education and occupations, mental ability, aptitude, scholastic achievement, study habits and attitudes, and personal problems identified by the subjects. A further problem of the study was the effectiveness of each of the characteristics in predicting the marriage of high school girls.
187
An analysis of the impact of data errors on backorder rates in the F404 engine system / Burson, Patrick A. R.
Approved for public release; distribution is unlimited. / In the management of the U.S. Naval inventory, data quality is of critical importance. Errors in major inventory databases contribute to increased operational costs, reduced revenue, and loss of confidence in the reliability of the supply system. Maintaining error-free databases is not a realistic objective. Data-quality efforts must be prioritized to ensure that limited resources are allocated to achieve the maximum benefit. This thesis proposes a methodology to assist the Naval Inventory Control Point in the prioritization of its data-quality efforts. By linking data errors to Naval inventory performance metrics, statistical testing is used to identify errors that have the greatest adverse impact on inventory operations. By focusing remediation efforts on errors identified in this manner, the Navy can best use the limited resources it devotes to improving data quality. Two inventory performance metrics are considered: Supply Material Availability (SMA), an established metric in Naval inventory management, and the Backorder Persistence Metric (BPM), which is developed in the thesis. Backorder persistence measures the duration of time that the ratio of backorders to quarterly demand exceeds a threshold value. Both metrics can be used together to target remediation at reducing shortage costs and improving inventory system performance. / Lieutenant Commander, Supply Corps, United States Navy
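A minimal sketch of the backorder-persistence idea follows; the threshold and data are illustrative, not the thesis's exact specification.

# Backorder persistence: quarters in which the backorder-to-demand
# ratio exceeds a threshold. Threshold and data are illustrative.
def backorder_persistence(backorders, demand, threshold=0.10):
    """Per-quarter series in, count of quarters above the threshold out."""
    return sum(1 for b, d in zip(backorders, demand)
               if d > 0 and b / d > threshold)

print("persistent quarters:",
      backorder_persistence(backorders=[3, 8, 12, 9, 2],
                            demand=[100, 90, 95, 110, 105]))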
188
Understanding the Holocaust: Ernest Becker and the "Heroic Nazi" / Martin, Stephen, 20 December 2009
This paper examines the power and limitations of historical analysis in explaining the Holocaust, and in particular the widespread consent to the Nazi program. One of the primary limitations that emerges is an inability of historians to fully engage the other social sciences so as to offer a more comprehensive explanation of why so many Germans took part in what we would consider an “evil” enterprise. In that regard, I offer the work of Ernest Becker, a social anthropologist, whose work provides a framework for understanding history as a succession of attempts by man to create societies that generate meaning through various heroic quests that defy man's finite existence, yet often result in carnage. Combining Becker's theoretical framework with the rich historical evidence specific to the Holocaust provides a much richer understanding of both Becker's work and why the Holocaust happened.
189
Decoding of block and convolutional codes in rank metric / Wachter-Zeh, Antonia, 04 October 2013
Rank-metric codes have attracted attention in recent years because of their possible applications to random linear network coding, public-key cryptography, space-time coding and distributed storage systems. A construction of algebraic rank-metric codes of optimal cardinality was introduced by Delsarte, Gabidulin and Roth some decades ago. These codes are regarded as the rank-metric analogue of Reed–Solomon codes and are based on the evaluation of linearized polynomials; they are nowadays called Gabidulin codes. This thesis deals with block and convolutional codes in the rank metric, with the objective of developing and studying efficient decoding algorithms for both classes of codes. After an introduction in Chapter 1, Chapter 2 provides a brief introduction to rank-metric codes and their properties. Chapter 3 considers efficient approaches to decoding Gabidulin codes. The first part of the chapter deals with fast algorithms for operations on linearized polynomials. The second part first summarises the known techniques for bounded minimum distance decoding of Gabidulin codes, which are based on syndromes and on solving a key equation; we then present and prove a new efficient algorithm for bounded minimum distance decoding of Gabidulin codes. Chapter 4 is devoted to interleaved Gabidulin codes and their decoding beyond half the minimum rank distance. We first describe the two known approaches to unique decoding and derive a relation between them and their failure probabilities. We then present a new decoding algorithm for interleaved Gabidulin codes based on interpolation of linearized polynomials. We prove the correctness of its two main steps, interpolation and root finding, and show that each of them can be carried out by solving a system of linear equations. So far no polynomial-time list decoding algorithm for Gabidulin codes is known, and it is in fact not even clear whether such an algorithm can exist. This motivated us to study, in Chapter 5, the possibilities of polynomial-time list decoding of rank-metric codes. The analysis is carried out by deriving bounds on the list size for rank-metric codes in general and for Gabidulin codes in particular. Surprisingly, all three new bounds reveal behaviour of rank-metric codes that is completely different from that of codes in the Hamming metric. Finally, Chapter 6 introduces convolutional codes in the rank metric. Our motivation for considering these codes is multi-shot random linear network coding, where the unknown network varies with time and is used several times. Convolutional codes create dependencies between the different uses of the network in order to adapt to difficult channels. Based on rank-metric block codes (in particular Gabidulin codes), we give two explicit constructions of rank-metric convolutional codes. The underlying block codes allow us to develop an efficient error-erasure decoding algorithm for the second construction, which is guaranteed to correct all error sequences of rank weight up to half the active row rank distance. A summary and an overview of open research problems are given at the end of each chapter. Finally, Chapter 7 concludes the thesis. / Rank-metric codes have recently attracted a lot of attention due to their possible applications to network coding, cryptography, space-time coding and distributed storage. An optimal-cardinality algebraic code construction in rank metric was introduced some decades ago by Delsarte, Gabidulin and Roth. This Reed–Solomon-like code class is based on the evaluation of linearized polynomials and is nowadays called Gabidulin codes. This dissertation considers block and convolutional codes in rank metric with the objective of designing and investigating efficient decoding algorithms for both code classes. After giving a brief introduction to codes in rank metric and their properties, we first derive sub-quadratic-time algorithms for operations with linearized polynomials and state a new bounded minimum distance decoding algorithm for Gabidulin codes. This algorithm directly outputs the linearized evaluation polynomial of the estimated codeword by means of the (fast) linearized Euclidean algorithm. Second, we present a new interpolation-based algorithm for unique and (not necessarily polynomial-time) list decoding of interleaved Gabidulin codes. This algorithm decodes most error patterns of rank greater than half the minimum rank distance by efficiently solving two linear systems of equations. As a third topic, we investigate the possibilities of polynomial-time list decoding of rank-metric codes in general and Gabidulin codes in particular. For this purpose, we derive three bounds on the list size. These bounds show that the behaviour of the list size for both Gabidulin codes and rank-metric block codes in general is significantly different from the behaviour of Reed–Solomon codes and of block codes in Hamming metric, respectively. The bounds imply, amongst other things, that there exists no polynomial upper bound on the list size in rank metric analogous to the Johnson bound in Hamming metric, which depends only on the length and the minimum rank distance of the code. Finally, we introduce a special class of convolutional codes in rank metric and propose an efficient decoding algorithm for these codes. These convolutional codes are (partial) unit memory codes built upon rank-metric block codes. This structure is crucial in the decoding process, since we exploit the efficient decoders of the underlying block codes in order to decode the convolutional code.
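To illustrate the central object, a small Python sketch of evaluating a linearized polynomial over GF(2^4) follows (the field, modulus and coefficients are illustrative); Gabidulin codewords are obtained by evaluating such polynomials, whose key property is GF(2)-linearity.

# Evaluating a linearized polynomial f(x) = f0*x + f1*x^2 + f2*x^4 over
# GF(2^4); field, modulus and coefficients are illustrative.
MOD = 0b10011                  # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^4), reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:        # degree overflow: reduce
            a ^= MOD
    return r

def lin_eval(coeffs, x):
    """sum_i coeffs[i] * x^(2^i): q-powers replace ordinary monomials."""
    acc, xp = 0, x
    for c in coeffs:
        acc ^= gf_mul(c, xp)   # field addition is XOR
        xp = gf_mul(xp, xp)    # next Frobenius power: x -> x^2
    return acc

# The defining property: f is GF(2)-linear, f(a + b) = f(a) + f(b).
f, a, b = [3, 5, 1], 7, 9
assert lin_eval(f, a ^ b) == lin_eval(f, a) ^ lin_eval(f, b)
print("linearity check passed")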
190
Simple Lie 2-algebras of toral rank 3 / Guevara, Carlos Rafael Payares, 05 December 2016
In this work we study finite-dimensional simple Lie 2-algebras of toral rank 3 over an algebraically closed field of characteristic 2. We conjecture that the only simple Lie 2-algebra of this type is W(1, 3). Our main objective is therefore to verify this conjecture for such algebras of small dimension. As a result, we prove that the conjecture holds for all such algebras of dimension at most 16, and also in some special cases when the dimension is 17.
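For illustration, a short Python sketch computing the brackets of W(1, 3) over GF(2) in a divided-power basis follows; the bracket formula used is the standard one for Witt-type algebras and is stated here as an assumption.

# Brackets in W(1, 3) over GF(2), basis e_a = x^(a) d/dx for 0 <= a < 8
# (divided powers); assumes the standard Witt-type bracket
# [e_a, e_b] = (C(a+b-1, a) - C(a+b-1, b)) e_{a+b-1}, reduced mod 2.
from math import comb

DIM = 8                        # dim W(1, n) = 2^n with n = 3

def bracket(a, b):
    """Return (coefficient mod 2, basis index) of [e_a, e_b], or None."""
    c = a + b - 1
    if not (0 <= c < DIM):
        return None
    coeff = (comb(c, a) - comb(c, b)) % 2   # comb(n, k) is 0 when k > n
    return (coeff, c) if coeff else None

pairs = [(a, b) for a in range(DIM) for b in range(a + 1, DIM) if bracket(a, b)]
print(len(pairs), "nonzero brackets among basis pairs, e.g.",
      [(a, b, bracket(a, b)) for a, b in pairs[:3]])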