251

Využití SAT solverů v úloze optimalizace kombinačních obvodů / Application of SAT Solvers in Circuit Optimization Problem

Minařík, Vojtěch January 2019 (has links)
This thesis focuses on applying the SAT problem and its variants to the evolutionary design of logic circuits. The goal is to speed up the evaluation of candidate circuits by the fitness function in cases where simulation becomes impractical. Using the SAT and #SAT problems makes the evolution of complex circuits with a high number of inputs significantly faster. The implemented solution is based on the #SAT problem. Two applications were implemented; they differ in the approach used to check the circuit outputs for incorrect values. The time complexity of the implemented algorithm depends on the logical complexity of the circuit, because it evaluates logic circuits through logical formulas and their satisfiability.
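The abstract describes the idea only at a high level. As a minimal sketch of the core mechanism (not the thesis's own code): a candidate circuit can be checked against a reference by building a miter and asking a SAT solver whether any input makes their outputs differ. The thesis builds on #SAT (counting satisfying assignments); the sketch below shows only the simpler SAT-based equivalence check at its heart. The python-sat package, the toy AND circuit, and the variable numbering are illustrative assumptions.

```python
# pip install python-sat
from pysat.solvers import Glucose3

def add_and(cnf, z, x, y):
    # Tseitin encoding of the gate z = x AND y
    cnf += [[-z, x], [-z, y], [z, -x, -y]]

def add_xor(cnf, z, x, y):
    # Tseitin encoding of the gate z = x XOR y
    cnf += [[-z, x, y], [-z, -x, -y], [z, x, -y], [z, -x, y]]

# Variables: 1 = a, 2 = b, 3 = candidate output, 4 = reference output, 5 = miter
cnf = []
add_and(cnf, 3, 1, 2)   # candidate circuit: out = a AND b
add_and(cnf, 4, 1, 2)   # reference circuit: out = a AND b
add_xor(cnf, 5, 3, 4)   # miter bit: 1 iff the two outputs differ
cnf.append([5])         # assert that some input produces a difference

with Glucose3(bootstrap_with=cnf) as solver:
    # UNSAT: no input distinguishes the circuits, i.e. they are equivalent.
    print("equivalent" if not solver.solve() else "mismatch found")
```

A #SAT variant of the same encoding would count the satisfying assignments of the miter formula instead, giving the number of input vectors on which the candidate is wrong — usable directly as a fitness score.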
252

Witnessing moral educators breaking (their) moral teachings, morality and self-reported crime : A study on adults in two countries, Sweden and Greece

Avratoglou, Alexandros January 2021 (has links)
The present paper extends previous research by integrating social learning with morality theories, under the framework of moral educators and their conflicting moral influences. Specifically, this study investigates the impact of witnessing moral educators breaking (their) moral teachings on individuals' morality and criminal behavior, using a sample from two countries, Sweden and Greece, with similar population sizes but entirely different cultural and social characteristics. We focus on three research questions: (i) the explanatory influence of witnessing this conflict on moral emotions and values by gender and country, (ii) its impact on traditional crime by gender and country, and (iii) the impact that witnessing the conflict and morality mutually have on traditional crime in the two countries. Our findings converge on three key points. First, witnessing moral educators influenced both moral emotions differentially in each country and gender, but only affected Swedish males' moral values. Second, our results showed that witnessing moral educators can explain a moderate to small variance in traditional crime only for males in the two countries. Lastly, we found that witnessing moral educators together with morality can explain a moderate variance in traditional crime in the two countries, while gender is highly important for both countries. Findings are discussed in relation to theory and previous research. Future research is recommended to expand the understanding of the cultural and social learning processes that inhibit (im)moral contexts and subsequently affect morality and offending.
253

Redukce nedeterministických konečných automatů / Reduction of the Nondeterministic Finite Automata

Procházka, Lukáš January 2011 (has links)
The nondeterministic finite automaton is an important tool for processing strings in many areas of programming. Reducing its size is important for improving program efficiency; however, this problem is computationally hard, so new techniques need to be explored. This work describes the basics of finite automata and then introduces several methods for their reduction. Practical reduction algorithms are described in greater detail, implemented, and tested, and the test results are evaluated.
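The abstract does not name the specific algorithms implemented. As a minimal illustration of the simplest reduction step any such pipeline starts with, the following sketch trims unreachable and dead states from an NFA; the dict-of-sets transition representation is an assumption.

```python
def trim_nfa(alphabet, delta, initial, accepting):
    """Remove unreachable and dead states from an NFA.

    delta: dict mapping (state, symbol) -> set of successor states.
    Trimming is a standard first step before more aggressive reductions
    such as simulation-based state merging.
    """
    # Forward pass: states reachable from the initial state
    reachable, stack = {initial}, [initial]
    while stack:
        q = stack.pop()
        for a in alphabet:
            for r in delta.get((q, a), set()):
                if r not in reachable:
                    reachable.add(r)
                    stack.append(r)
    # Backward pass: states from which an accepting state is reachable
    live = set(accepting) & reachable
    changed = True
    while changed:
        changed = False
        for (q, a), succ in delta.items():
            if q in reachable and q not in live and succ & live:
                live.add(q)
                changed = True
    new_delta = {(q, a): succ & live
                 for (q, a), succ in delta.items() if q in live}
    # If the initial state died, the language is empty (returned as None)
    return live, new_delta, initial if initial in live else None, set(accepting) & live

# Example: state 3 is unreachable, state 2 is dead (never reaches acceptance)
delta = {(0, 'a'): {1, 2}, (1, 'b'): {1}, (3, 'a'): {1}}
print(trim_nfa({'a', 'b'}, delta, 0, {1}))
```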
254

Apport de la sonde atomique tomographique dans l'étude structurale et magnétique du semi-conducteur magnétique 6H-SiC implanté avec du fer : vers un semi-conducteur magnétique à température ambiante / Contribution of the atom probe tomography to the structural and magnetic study of the magnetic 6H-SiC semiconductor implanted with iron : towards a magnetic semiconductor at room temperature

Diallo, Mamadou Lamine 16 June 2017 (has links)
Great hopes are placed on diluted magnetic semiconductors (DMS) for new spintronic components. The technological challenge is to develop materials with both semiconducting and ferromagnetic properties. The aim of this work is to carry out a detailed nanostructural and magnetic study of the Fe:SiC system, a promising candidate for a diluted magnetic semiconductor at room temperature. However, the magnetic properties of 6H-SiC implanted with transition metals (TM) depend strongly on the material's microstructure (content and nature of the dopant, precipitation of the dopant, etc.). In order to understand all the nanostructural and magnetic properties, we studied the Fe:SiC system at the atomic scale using atom probe tomography (APT) coupled with 57Fe Mössbauer spectrometry. p- and n-type (~10¹⁸ cm⁻³) single-crystal 6H-SiC (0001) samples were multi-implanted with 56Fe and 57Fe ions at different energies and fluences, leading to an atomic iron concentration of 6% and 4% at a depth of 20 to 120 nm from the surface. In this work, we were able to follow the nanostructure of the Fe:SiC system as a function of the iron concentration and of the implantation and annealing temperatures. We established new results: the nature and size of the nanoparticles, and a precise evaluation of the number of iron atoms diluted in the SiC matrix. The different ferromagnetic and paramagnetic contributions are identified and clearly explained thanks to the coupling of experimental techniques such as APT, Mössbauer spectrometry, and SQUID (Superconducting Quantum Interference Device) magnetometry. We succeeded in determining optimal conditions for obtaining a DMS at room temperature: in the samples implanted with 4% Fe at 380°C, more than 90% of the Fe atoms are diluted, and these diluted Fe atoms are the main contributors to the ferromagnetic properties measured by SQUID and Mössbauer spectrometry at 300 K. These experimental results highlight the possibility of realizing a new DMS at room temperature.
255

Ion irradiation effects on high purity bcc Fe and model FeCr alloys / Effets de l'irradiation d’ions sur fer cubic centrée de haute pureté et FeCr alliage modèle

Bhattacharya, Arunodaya 09 December 2014 (has links)
FeCr binary alloys are a simple representative of the reduced-activation ferritic/martensitic (F-M) steels, which are currently the most promising candidate structural materials for sodium-cooled fast reactors (SFR) and future fusion systems. However, the impact of Cr on the evolution of the irradiated microstructure in these materials is not well understood. Moreover, particularly for fusion applications, the radiation damage scenario is expected to be further complicated by the presence of large quantities of He produced by nuclear transmutation (~10 appm He/dpa). Within this context, an elaborate ion irradiation study was performed at 500 °C on a wide variety of high-purity FeCr alloys (with Cr content ranging from ~3 wt.% to 14 wt.%) and on bcc Fe, to probe in detail the influence of Cr and He on the evolution of the microstructure. The irradiations were performed using Fe self-ions, in single-beam mode and in dual-beam mode (damage by Fe ions with co-implantation of He), to separate the ballistic damage effect from the impact of simultaneous He injection. Three dose ranges were studied: high dose (157 dpa, 17 appm He/dpa in the dual-beam case), intermediate dose (45 dpa, 57 appm He/dpa) and in-situ low dose (0.33 dpa, 3030 appm He/dpa). The experiments were performed at the JANNuS triple-beam facility at CEA-Saclay and the dual-beam in-situ irradiation facility at CSNSM, Orsay. The microstructure was characterized principally by conventional TEM, APT, and EDS in STEM mode. The main results are as follows: 1) A comparison of the cavity microstructure in high-dose irradiated Fe revealed a strong swelling reduction upon the addition of He, achieved through a drastic reduction in cavity sizes despite an increased number density. This behaviour was observed along the entire damage depth, up to the damage peak. 2) The cavity microstructure was also studied in the dual-beam high-dose irradiated FeCr alloys, and the results were compared to bcc Fe. The analysis was performed at an intermediate depth of 300-400 nm below the surface (to avoid injected-interstitial and surface effects), corresponding to 128 dpa, 13 appm He/dpa. The TEM study revealed that the addition of small quantities of Cr, as low as 3 wt.%, is highly efficient in reducing void swelling, again through a drastic reduction of cavity sizes. For instance, the average cavity size in Fe-3%Cr was 0.9 nm, as opposed to 6.8 nm in bcc Fe. Furthermore, the variation of void swelling as a function of Cr content is non-monotonic, with a local maximum around 9-10 wt.% Cr. 3) Coupling conventional TEM, STEM/EDS, and APT analysis of the low- and intermediate-dose irradiated FeCr alloys revealed the presence of Cr-enriched zones on the habit plane of the dislocation loops. This is attributed to radiation-induced segregation (RIS) of Cr close to the core of the loops. As a loop grows under irradiation, the segregated areas are probably prevented from re-dissolving by impurity elements such as C. When imaged by TEM using classical diffraction-contrast techniques, these enriched zones produce displacement fringe contrast on the loop plane. A quantitative estimate of this enrichment was deduced by STEM/EDS and APT: the Cr content in these areas was 23-35 at.% measured by EDS and 22 ± 2 at.% obtained by APT, which is well below the Cr content of the Cr-rich α' phase.
256

Speaker adaptation of deep neural network acoustic models using Gaussian mixture model framework in automatic speech recognition systems / Utilisation de modèles gaussiens pour l'adaptation au locuteur de réseaux de neurones profonds dans un contexte de modélisation acoustique pour la reconnaissance de la parole

Tomashenko, Natalia 01 December 2017 (has links)
Differences between training and testing conditions may significantly degrade recognition accuracy in automatic speech recognition (ASR) systems. Adaptation is an efficient way to reduce the mismatch between the models and the data from a particular speaker or channel. There are two dominant types of acoustic models (AMs) used in ASR: Gaussian mixture models (GMMs) and deep neural networks (DNNs). The GMM hidden Markov model (GMM-HMM) approach has been one of the most common techniques in ASR systems for many decades, and speaker adaptation is very effective for these AMs: various adaptation techniques have been developed for them. On the other hand, DNN-HMM AMs have recently achieved major advances and outperformed GMM-HMM models on various ASR tasks. However, speaker adaptation is still very challenging for these AMs; many adaptation algorithms that work well for GMM systems cannot be easily applied to DNNs because of the different nature of these models. The main purpose of this thesis is to develop a method for the efficient transfer of adaptation algorithms from the GMM framework to DNN models. A novel approach for speaker adaptation of DNN AMs is proposed and investigated, based on using so-called GMM-derived features as input to a DNN (see the sketch below). The proposed technique provides a general framework for transferring adaptation algorithms developed for GMMs to DNN adaptation. It is explored for various state-of-the-art ASR systems and is shown to be effective in comparison with other speaker adaptation techniques, as well as complementary to them.
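A minimal sketch of the GMM-derived-features idea: per-frame posteriors from a trained GMM become the input representation of the DNN, so adaptation applied in GMM space (e.g. MAP) changes the DNN's inputs without touching the network. The scikit-learn GMM, the synthetic "MFCC" data, and all dimensions below are illustrative assumptions, not the thesis's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 13))        # stand-in for MFCC frames

# Train a GMM on the acoustic features (adaptation would update this model)
gmm = GaussianMixture(n_components=32, covariance_type='diag', random_state=0)
gmm.fit(frames)

# GMM-derived features: per-frame log posteriors over the 32 components
post = gmm.predict_proba(frames)            # (1000, 32) responsibilities
gmm_features = np.log(post + 1e-10)

def splice(feats, context=5):
    # DNN inputs typically splice several neighbouring frames together
    padded = np.pad(feats, ((context, context), (0, 0)), mode='edge')
    return np.hstack([padded[i:i + len(feats)] for i in range(2 * context + 1)])

dnn_input = splice(gmm_features)            # (1000, 32 * 11), fed to the DNN
```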
257

Automation of Operation and Testing for European Space Agency's OPS-SAT Mission

Hessinger, Felix January 2019 (has links)
This thesis presents a solution for mission operation automation in the European Space Agency's (ESA) OPS-SAT mission. To achieve this, the ESA-internal mission automation system (MATIS) is used in combination with the mission control software (SCOS). Together they control the satellite and all ground peripherals and programmes to enable fully automated, unsupervised satellite passes. The goal of this work is the transition from the existing manual operation, with a human operator watching over and controlling all systems, to an automated system that supports the operations engineer and replaces the human operator. A large part of this thesis covers the setup, configuration, and integration of all programmes and virtual machines, and the testing of the MATIS software as well as of the Service Management Framework (SMF), which connects MATIS to non-MATIS applications such as SCOS. During testing, many problems were identified, not only OPS-SAT-specific ones but also general problems applying to all missions that consider using MATIS for future operation automation. These findings and the bugs discovered during testing were reported to the responsible authorities and are presented in this work. The thesis further elaborates the mission operation automation concept and the satellite pass concept, providing an in-depth view of OPS-SAT's automation and passes as well as the general concepts and rationale, which other missions can use to accelerate integration. An additional key contribution is a newly developed standard for operation notation in Excel, achieved in close cooperation with the operations engineer. Furthermore, to accelerate the switch from manual to automated procedures, several converters were developed iteratively alongside the new standard. These converters allow fast transformation from Excel to PLUTO, the procedure programming language used by MATIS (see the sketch below). Not only do the results and converters of this work accelerate procedure integration by 80%, they also deliver a more stable mission automation system that other missions can use as well. Operation automation significantly reduces the operational costs of satellites and space missions and reduces human error to a minimum. This thesis is therefore a first step towards complete automation in the area of satellite operations; without such automation, future satellite cluster configurations like SpaceX's Starlink would be impossible to operate in practice, their complexity exceeding human comprehension and reaction times.
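As a purely hypothetical sketch of the Excel-to-PLUTO conversion step: the mission's real converters, spreadsheet layout, and emitted PLUTO syntax are project-specific and not published in the abstract, so the column layout and the simplified PLUTO-like output template below are assumptions for illustration only.

```python
from openpyxl import load_workbook

def convert(xlsx_path, procedure_name):
    """Emit a simplified PLUTO-like procedure from an operations spreadsheet.

    Assumed layout: column A = step name, column B = activity to initiate.
    """
    ws = load_workbook(xlsx_path).active
    lines = [f"procedure {procedure_name}", "  main"]
    for step, activity in ws.iter_rows(min_row=2, max_col=2, values_only=True):
        if step and activity:
            lines.append(f"    initiate and confirm step {step}")
            lines.append(f"      initiate and confirm {activity}")
            lines.append("    end step")
    lines += ["  end main", "end procedure"]
    return "\n".join(lines)

# Hypothetical usage with an illustrative file name
print(convert("ops_sat_pass.xlsx", "OPS_SAT_PASS"))
```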
258

The in silico prediction of foot-and-mouth disease virus (FMDV) epitopes on the South African territories (SAT)1, SAT2 and SAT3 serotypes

Mukonyora, Michelle 24 January 2017 (has links)
Foot-and-mouth disease (FMD) is a highly contagious and economically important disease that affects even-toed hoofed mammals. The FMD virus (FMDV) is the causative agent of FMD, of which there are seven clinically indistinguishable serotypes. Three serotypes, namely South African Territories (SAT)1, SAT2 and SAT3, are endemic to southern Africa and are the most antigenically diverse among the FMDV serotypes. A negative consequence of this antigenic variation is that infection or vaccination with one virus may not provide immune protection from other strains, or may confer only partial protection. The identification of B-cell epitopes is therefore key to rationally designing cross-reactive vaccines that recognize the immunologically distinct serotypes present within the population. Computational epitope prediction methods, which exploit the inherent physicochemical properties of epitopes in their algorithms, have been proposed as a cost- and time-effective alternative to classical experimental methods. The aim of this project is to employ in silico epitope prediction programmes to predict B-cell epitopes on the capsids of the SAT serotypes. Sequence data for 18 immunologically distinct SAT1, SAT2 and SAT3 strains from across southern Africa were collated. Since only one SAT1 virus has had its structure elucidated by X-ray crystallography (PDB ID: 2WZR), homology models of the 18 virus capsids were built computationally using Modeller v9.12 and then subjected to energy minimization using the AMBER force field. The quality of the models was evaluated and validated stereochemically and energetically using the PROMOTIF and ANOLEA servers, respectively. The homology models were subsequently used as input to two different epitope prediction servers, namely Discotope1.0 and Ellipro; only those epitopes predicted by both programmes were defined as epitopes (see the sketch below). Both previously characterised and novel epitopes were predicted on the SAT strains. Some of the novel epitopes are located on the same loops as experimentally derived epitopes, while others lie on a putative novel antigenic site close to the five-fold axis of symmetry. A consensus set of 11 epitopes common to at least 15 of the 18 SAT strains was collated. In future work, the epitopes predicted in this study will be experimentally validated using mutagenesis studies; those found to be true epitopes may be used in the rational design of broadly reactive SAT vaccines / Life and Consumer Sciences / M. Sc. (Life Sciences)
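A minimal sketch of the consensus logic described above: keep only residues predicted by both tools for each strain, then collect epitopes shared by at least 15 of the 18 strains. The per-strain prediction sets below are toy placeholders, not real DiscoTope or Ellipro outputs.

```python
from collections import Counter

def consensus_per_strain(discotope_hits, ellipro_hits):
    """Residues called epitopic by both Discotope1.0 and Ellipro."""
    return set(discotope_hits) & set(ellipro_hits)

# strain -> (DiscoTope residue set, Ellipro residue set); illustrative data
predictions = {
    "SAT1_A": ({"VP1:144", "VP2:72"}, {"VP1:144", "VP3:58"}),
    "SAT2_B": ({"VP1:144", "VP3:58"}, {"VP1:144", "VP3:58"}),
    "SAT3_C": ({"VP1:144"}, {"VP1:144", "VP2:72"}),
}

counts = Counter()
for disc, elli in predictions.values():
    counts.update(consensus_per_strain(disc, elli))

threshold = 2  # would be 15 for the thesis's 18 strains
shared = [res for res, n in counts.items() if n >= threshold]
print(shared)  # ['VP1:144']
```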
259

Statistical physics of constraint satisfaction problems

Lamouchi, Elyes 10 1900 (has links)
The replica trick is a powerful analytic technique originating from statistical physics as an attempt to compute the expectation of the logarithm of the normalization constant of a high-dimensional probability distribution known as the Gibbs measure. In physics jargon this quantity is known as the free energy, and all kinds of useful quantities, such as the entropy, can be obtained from it using simple derivatives. The computation of this normalization constant is however an NP-hard problem that a large part of computational statistics attempts to deal with, and which shows up everywhere from coding theory, to high-dimensional statistics, compressed sensing, protein-folding analysis and constraint satisfaction problems. In each of these cases, the replica trick, and its extension by (Parisi et al., 1987), have proven incredibly successful at shedding light on key aspects relating to the correlation structure of the Gibbs measure and the highly non-convex nature of its negative logarithm. Algorithmically speaking, there exist two main methodologies addressing the intractability of the normalization constant: a) Statics: in this approach, one casts the system as a graphical model whose vertices represent individual variables, and whose edges reflect the dependencies between them. When the underlying graph is locally tree-like, local message-passing procedures are guaranteed to yield near-exact marginal probabilities or, equivalently, to compute the normalization constant Z. The physics prediction of vanishing long-range correlations in the Gibbs measure then translates into the associated graph being locally tree-like, hence permitting the use of message-passing procedures. This is the focus of chapter 4. b) Dynamics: in an orthogonal direction, we can altogether bypass the issue of computing the normalization constant by defining a Markov chain along which sampling converges to the Gibbs measure, such that after a number of iterations known as the relaxation time, samples are guaranteed to be approximately distributed according to the Gibbs measure (see the sketch below). To get into the conditions in which each of the two approaches is likely to fail (strong long-range correlations, high energy barriers, etc.), it is very helpful to be familiar with the so-called replica symmetry breaking picture of Parisi. The computations involved are however quite intricate, and come with a number of prescriptions and prerequisite notions (such as large deviation principles and saddle-point approximations) that are typically foreign to those without a statistical physics background. The purpose of this thesis is then twofold: i) to provide a self-contained introduction to replica theory, its predictions, and its algorithmic implications for constraint satisfaction problems, and ii) to give an account of state-of-the-art methods addressing the predicted phase transitions in the case of k-SAT, from both the statics and the dynamics points of view, and to propose a new algorithm that takes these into consideration.
260

Selling "Dream Insurance" : The Standardized Test-preparation Industry's Search for Legitimacy, 1946-1989

Shepherd, Keegan 01 January 2011 (has links)
This thesis analyzes the origins, growth, and legitimization of the standardized test preparation ("test-prep") industry from the late 1940s to the end of the 1980s. In particular, it focuses on the development of Stanley H. Kaplan Education Centers, Ltd. ("Kaplan") and The Princeton Review ("TPR"), and on how these companies were instrumental in making the test-prep industry, and standardized test preparation itself, socially acceptable. The standardized test most frequently discussed in this thesis is the Scholastic Aptitude Test ("SAT"), especially after its development came under the control of the Educational Testing Service ("ETS"), but due attention is also given to the American College Testing Program ("ACT"). This thesis argues that certain test-prep companies gained legitimacy by successfully manipulating the interstices of American business and education, and brokered legitimacy through the rhetorical devices in their advertising. The legitimacy of the industry at large, however, was gained by default, as neither the American government nor the American public could conclusively demonstrate that the industry engaged in wholesale fraud. The thesis also argues that standardized test manufacturers were forced into a cat-and-mouse game of pseudo-antagonism and adaptation with the test-prep industry once truth-in-testing laws prescribed transparent operations in standardized testing. These developments affect the current state of American standardized testing, its fluctuating but ubiquitous presence in the college admissions process, and the perpetuation of the test-prep industry decades after its origins.
