
Statistical physics of constraint satisfaction problems

Lamouchi, Elyes
The replica trick is a powerful analytic technique originating in statistical physics as a means of computing the expectation of the logarithm of the normalization constant of a high-dimensional probability distribution known as the Gibbs measure. In physics jargon this quantity is known as the free energy, and all kinds of useful quantities, such as the entropy, can be obtained from it through simple derivatives. Computing this normalization constant is, however, an NP-hard problem that a large part of computational statistics attempts to deal with, and which shows up everywhere from coding theory to high-dimensional statistics, compressed sensing, protein-folding analysis, and constraint satisfaction problems. In each of these cases, the replica trick, and its extension by Parisi et al. (1987), have proven incredibly successful at shedding light on key aspects of the correlation structure of the Gibbs measure and the highly non-convex nature of its negative logarithm.
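For concreteness, the quantities the abstract refers to can be written out explicitly; the following is the standard textbook formulation, with notation chosen here for illustration rather than taken from the thesis. For a configuration \sigma with energy E(\sigma) at inverse temperature \beta, the Gibbs measure and its normalization constant (the partition function) are

\[ \mu(\sigma) \;=\; \frac{e^{-\beta E(\sigma)}}{Z}, \qquad Z \;=\; \sum_{\sigma} e^{-\beta E(\sigma)}, \]

the (quenched) free energy and the entropy obtained from it by differentiation are

\[ F \;=\; -\frac{1}{\beta}\,\mathbb{E}[\log Z], \qquad S \;=\; \beta^{2}\,\frac{\partial F}{\partial \beta}, \]

and the replica trick trades the hard average of \log Z for analytically continued integer moments of Z:

\[ \mathbb{E}[\log Z] \;=\; \lim_{n \to 0} \frac{\mathbb{E}[Z^{n}] - 1}{n}. \]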
Algorithmically speaking, there exist two main methodologies for addressing the intractability of the normalization constant. a) Statics: in this approach, one casts the system as a graphical model whose vertices represent individual variables and whose edges reflect the dependencies between them. When the underlying graph is locally tree-like, local message-passing procedures are guaranteed to yield near-exact marginal probabilities or, equivalently, to compute the normalization constant Z. The physics prediction of vanishing long-range correlations in the Gibbs measure then translates into the associated graph being locally tree-like, hence permitting the use of message-passing procedures. This is the focus of Chapter 4. b) Dynamics: in an orthogonal direction, one can bypass the computation of the normalization constant altogether by defining a Markov chain that converges to the Gibbs measure, such that after a number of iterations known as the relaxation time, samples are guaranteed to be approximately distributed according to it.

To understand the conditions under which each of the two approaches is likely to fail (strong long-range correlations, high energy barriers, etc.), it is very helpful to be familiar with Parisi's so-called replica symmetry breaking picture. The computations involved are, however, quite intricate, and come with a number of prescriptions and prerequisite notions (such as large deviation principles and saddle-point approximations) that are typically foreign to those without a statistical physics background. The purpose of this thesis is therefore twofold: i) to provide a self-contained introduction to replica theory, its predictions, and its algorithmic implications for constraint satisfaction problems, and ii) to give an account of state-of-the-art methods for addressing the predicted phase transitions in the case of k-SAT, from both the statics and dynamics points of view, and to propose a new algorithm that takes these transitions into consideration.
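To illustrate the "statics" route described in the abstract, here is a minimal sum-product (belief propagation) sketch on a small tree-structured pairwise model. The graph, coupling J, field h, and temperature are illustrative assumptions, not values from the thesis, and the brute-force check is only feasible because the example is tiny.

import itertools
import numpy as np

# Tree-structured pairwise model over spins in {-1, +1}: edges 0-1, 1-2, 1-3.
edges = [(0, 1), (1, 2), (1, 3)]
n = 4
beta, J, h = 1.0, 0.8, 0.2
states = np.array([-1.0, 1.0])

neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def pair(xi, xj):      # pairwise factor exp(beta * J * xi * xj)
    return np.exp(beta * J * xi * xj)

def field(xi):         # single-site factor exp(beta * h * xi)
    return np.exp(beta * h * xi)

# messages[(i, j)] is the message from variable i to variable j, a vector over x_j
messages = {(i, j): np.ones(2) for i in range(n) for j in neighbors[i]}

for _ in range(10):    # on a tree, a few sweeps reach the exact fixed point
    for (i, j) in list(messages):
        out = np.zeros(2)
        for jb, xj in enumerate(states):
            total = 0.0
            for ib, xi in enumerate(states):
                prod = field(xi) * pair(xi, xj)
                for k in neighbors[i]:
                    if k != j:
                        prod *= messages[(k, i)][ib]
                total += prod
            out[jb] = total
        messages[(i, j)] = out / out.sum()   # normalize for numerical stability

def bp_marginal(i):
    belief = np.array([field(x) for x in states])
    for k in neighbors[i]:
        belief *= messages[(k, i)]
    return belief / belief.sum()

def exact_marginal(i):                       # brute force over all 2^n assignments
    w = np.zeros(2)
    for cfg in itertools.product([-1.0, 1.0], repeat=n):
        weight = np.prod([field(x) for x in cfg])
        weight *= np.prod([pair(cfg[a], cfg[b]) for a, b in edges])
        w[int(cfg[i] > 0)] += weight
    return w / w.sum()

print("BP marginal of spin 0:   ", bp_marginal(0))
print("exact marginal of spin 0:", exact_marginal(0))

Because the graph is a tree, the BP and exact marginals coincide; on locally tree-like random graphs the same update rule gives asymptotically correct marginals and, through the Bethe free energy, an estimate of log Z.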
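For the "dynamics" route, a comparably minimal sketch is a Metropolis chain over assignments of a random 3-SAT instance, with the energy defined as the number of unsatisfied clauses. The instance size, inverse temperature, and step count below are arbitrary illustrative choices, not parameters from the thesis.

import math
import random

random.seed(0)
n_vars, n_clauses, k, beta = 50, 150, 3, 2.0

# Each clause is a list of (variable index, negated?) literals drawn at random.
clauses = [[(random.randrange(n_vars), random.random() < 0.5) for _ in range(k)]
           for _ in range(n_clauses)]

def energy(x):
    """Number of clauses left unsatisfied by the Boolean assignment x."""
    unsat = 0
    for clause in clauses:
        if not any(x[i] != neg for i, neg in clause):   # literal i is true iff x[i] != neg
            unsat += 1
    return unsat

x = [random.random() < 0.5 for _ in range(n_vars)]
e = energy(x)

for _ in range(20000):
    i = random.randrange(n_vars)      # propose flipping a single variable
    x[i] = not x[i]
    e_new = energy(x)                 # recomputing from scratch keeps the sketch short
    # Metropolis rule: accept downhill moves, accept uphill moves with prob exp(-beta * dE)
    if e_new <= e or random.random() < math.exp(-beta * (e_new - e)):
        e = e_new
    else:
        x[i] = not x[i]               # reject: undo the flip

print("unsatisfied clauses after sampling:", e)

As the clause-to-variable ratio approaches the predicted phase transition, chains of this kind are expected to relax slowly because of the energy barriers mentioned above, which is the failure mode the dynamics viewpoint has to contend with.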

Selling "Dream Insurance" : The Standardized Test-preparation Industry's Search for Legitimacy, 1946-1989

Shepherd, Keegan, 01 January 2011
This thesis analyzes the origins, growth, and legitimization of the standardized test-preparation ("test-prep") industry from the late 1940s to the end of the 1980s. In particular, it focuses on the development of Stanley H. Kaplan Education Centers, Ltd. ("Kaplan") and The Princeton Review ("TPR"), and on how these companies did the most to make the test-prep industry, and standardized test preparation itself, socially acceptable. The standardized test most frequently discussed in this thesis is the Scholastic Aptitude Test ("SAT"), especially after its development came under the control of the Educational Testing Service ("ETS"), but due attention is also given to the American College Testing Program ("ACT"). This thesis argues that certain test-prep companies gained legitimacy by successfully manipulating the interstices of American business and education, and brokered that legitimacy through the rhetorical devices in their advertising. Legitimacy for the industry at large, however, was gained by default, as neither the American government nor the American public could conclusively demonstrate that the industry conducted wholesale fraud. The thesis also argues that standardized test manufacturers were forced into a cat-and-mouse game of pseudo-antagonism and adaptation with the test-prep industry once truth-in-testing laws prescribed transparent operations in standardized testing. These developments affect the current state of American standardized testing, its fluctuating but ubiquitous presence in the college admissions process, and the perpetuation of the test-prep industry decades after its origins.
