101

Geometria da informação : métrica de Fisher / Information geometry : Fisher's metric

Porto, Julianna Pinele Santos, 1990- 23 August 2018 (has links)
Advisor: João Eloir Strapasson / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Originally issued: 2013 / Abstract: Information Geometry is an area of mathematics that uses geometric tools in the study of statistical models. In 1945, Rao introduced a Riemannian metric on the space of probability distributions using the information matrix given by Ronald Fisher in 1921. With the metric associated with this matrix, one can define a distance between two probability distributions (Rao's distance), as well as geodesics, curvatures, and other properties of the space. Since then, many authors have studied this subject, which is naturally connected to various applications such as statistical inference, stochastic processes, information theory, and image distortion. In this work we give a brief introduction to differential and Riemannian geometry and survey some results obtained in Information Geometry. We present Rao's distance between some probability distributions, with special attention to the study of this distance in the space of multivariate normal distributions. Since no closed form is known in this space for either the distance or the geodesic curve, we focus on computing bounds for Rao's distance, and in some cases we improve the upper bound given by Calvo and Oller in 1990. / Master's degree / Applied Mathematics
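
As an aside to this abstract: for univariate normal distributions the Rao distance does have a well-known closed form (under the standard identification of the normal family with the hyperbolic half-plane), which helps illustrate the object whose multivariate analogue the thesis bounds. The sketch below is a generic illustration of that formula, not code from the dissertation.

```python
import numpy as np

def rao_distance_univariate_normal(mu1, sigma1, mu2, sigma2):
    """Rao (Fisher-Rao) distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).

    Uses the closed form obtained by mapping the Fisher metric
    ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2 onto the hyperbolic half-plane.
    """
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * sigma1 * sigma2))

# Small examples: the distance grows with the mean gap relative to the spread.
print(rao_distance_univariate_normal(0.0, 1.0, 2.0, 1.0))   # ~1.86: means differ by 2 sigma
print(rao_distance_univariate_normal(0.0, 1.0, 0.0, 3.0))   # ~1.55: same mean, tripled spread
```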
102

La proposition 100% monnaie des années 1930 : clarification conceptuelle et analyse théorique / The 100% money proposal of the 1930s : conceptual clarification and theoretical analysis

Demeulemeester, Samuel 06 December 2019 (has links)
This thesis studies the 100% money proposal as it was formulated in the United States in the 1930s, in particular by Henry Simons (the main author of the "Chicago Plan"), Lauchlin Currie and Irving Fisher. The essence of this proposal is to divorce the creation of money from the lending of money: deposits serving as means of payment would be subject to 100% reserves in lawful money, giving the state a monopoly over money creation. Because this reform idea is regularly subject to confusion, we endeavour to clarify its concept and study its main arguments. In chapter 1, we show that the 100% money proposal ought not to be viewed as a mere avatar of the "Currency School" ideas: contrary to Peel's Act of 1844, it contains no issuing rule in itself, leaving open the "rules versus discretion" debate. In chapter 2, distinguishing between two broad approaches to the 100% money proposal, we show that it in no way implies abolishing bank intermediation based on savings deposits. In chapter 3, we analyse, through Fisher's works, the main objective of the 100% money proposal: putting an end to the pro-cyclical behaviour of the volume of money, caused by the dependence of money creation on bank lending. In chapter 4, we study another argument for the 100% money proposal: that it would allow a reduction of public debt by returning the totality of seigniorage to the state, an oft-criticised argument which, as we show, is nevertheless not unfounded. As the 100% money proposal has attracted renewed interest since the 2008 crisis, clarifying these issues seemed fundamental to us.
103

Applications de la théorie de l'information à l'apprentissage statistique / Applications of Information Theory to Machine Learning

Bensadon, Jérémy 02 February 2016 (has links)
We study two different topics, in both cases drawing on ideas from information theory: 1) Context Tree Weighting is a text compression algorithm that efficiently computes the Bayesian combination of all visible Markov models: we build a "context tree", whose deeper nodes correspond to more complex models, and the mixture is computed recursively, starting from the leaves. We extend this idea to a more general setting that also encompasses density estimation and regression, and we investigate the benefits of replacing regular Bayesian mixtures with switch distributions, which put a prior on sequences of models instead of single models. 2) Information Geometric Optimization (IGO) is a general framework for black-box optimization that recovers several state-of-the-art algorithms, such as CMA-ES and xNES. The initial problem is transferred to a Riemannian manifold, yielding a parametrization-invariant first-order differential equation. In practice, however, time must be discretized, and the invariance then only holds up to first order. We introduce the Geodesic IGO (GIGO) update, which uses this Riemannian manifold structure to define a fully parametrization-invariant algorithm. Thanks to Noether's theorem, we obtain a first-order differential equation satisfied by the geodesics of the statistical manifold of Gaussians, which allows us to compute the corresponding GIGO update. Finally, we show that while GIGO and xNES differ in general, it is possible to define a new, almost parametrization-invariant algorithm, Blockwise GIGO, that recovers xNES from abstract principles.
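
As a toy illustration of the IGO idea described above (not Bensadon's GIGO implementation), the sketch below runs an xNES-style natural-gradient update for a one-dimensional Gaussian search distribution; the step sizes, weights and population size are arbitrary choices for the example.

```python
import numpy as np

def igo_gaussian_1d(f, m=0.0, sigma=1.0, popsize=20, eta_m=1.0, eta_sigma=0.1,
                    iters=100, rng=None):
    """Minimise f with a 1-D Gaussian search distribution N(m, sigma^2).

    Rank-based (quantile) weights and a natural-gradient step in the
    parameters (m, log sigma): a toy version of the xNES-style updates
    that the IGO framework recovers.
    """
    rng = np.random.default_rng(rng)
    # Truncation weights: the better half of the population shares the mass equally.
    mu = popsize // 2
    w = np.zeros(popsize)
    w[:mu] = 1.0 / mu
    for _ in range(iters):
        z = rng.standard_normal(popsize)          # samples in "natural" coordinates
        x = m + sigma * z
        order = np.argsort(f(x))                  # best (smallest f) first
        zs = z[order]
        m += eta_m * sigma * np.dot(w, zs)                       # natural-gradient step on m
        sigma *= np.exp(0.5 * eta_sigma * np.dot(w, zs**2 - 1))  # and on log sigma
    return m, sigma

# Example: minimise a shifted quadratic.
m, s = igo_gaussian_1d(lambda x: (x - 3.0) ** 2, rng=0)
print(m, s)   # m should approach 3 and sigma should shrink
```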
104

Real Estate Forecasting – An evaluation of forecasts / Prognoser på fastighetsmarknaden – Utvärdering av träffsäkerheten hos prognoser

Horttana, Jonas January 2013 (has links)
This degree project explores the subject of forecasting, an ongoing and very lively debate within economics and finance. The available research on forecasting is vast, and even when restricted to real estate, the main focus of this paper, the material is comprehensive. A large fraction of published research on real estate forecasting consists of post-mortem studies, with econometric models trying to replicate historical trends with the help of available micro and macro data. That branch of the field seems to advance with the help of increasingly refined econometric models. This paper, on the other hand, examines the fundamentals behind forecasting and why forecasting can be a difficult task in general. This is shown through an examination of the accuracy of 160 unique forecasts within the field of real estate. To evaluate accuracy and predictability from different perspectives, we state three main null hypotheses: 1. Correct forecasts and the direction of the predictions are independent variables. 2. Correct forecasts and the examined consultants are independent variables. 3. Correct forecasts and the examined cities are independent variables. The observed frequencies for Hypothesis 1 indicate that upward movements seem to be easier to predict than downward movements; this is, however, not supported by the statistical tests. The observed frequencies for Hypothesis 2 clearly indicate that one consultant is a superior forecaster compared to the other consultants, and the statistical tests confirm this. The observed frequencies for Hypothesis 3 show no signs of dependence between the variables, and the statistical tests confirm this.
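
The three null hypotheses above are independence hypotheses, which are naturally assessed with a chi-square test of independence on a contingency table of forecast outcomes. The sketch below shows the mechanics on a made-up table; the counts are hypothetical and are not the thesis data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table (not the thesis data): rows are the
# predicted direction, columns are whether the forecast turned out correct.
#                 correct  incorrect
table = np.array([[55,      25],    # upward forecasts
                  [20,      20]])   # downward forecasts

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value would reject Hypothesis 1 (independence of correctness
# and forecast direction); here the decision depends on the chosen alpha.
```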
105

Optimering av lagernivåer vid distributionscentralen Bygg Ole / Optimization of inventory levels at the distribution central of Bygg Ole

Göransson, Gustav, Johnson, Mathias January 2016 (has links)
The aim of this thesis was to examine possible improvements in inventory management and ordering procedures at Bygg Ole Saltsjö-Boo. A combination of aspects from Systems Engineering and from Industrial Engineering and Management has been used. In the report, a Guaranteed Service-Level model based on historical sales data is applied in combination with relevant theories of inventory carrying cost. The study was limited to selected high-turnover products from two suppliers of Bygg Ole. All of these products except one show low seasonal variation in demand. Furthermore, special consideration was given to service level, cost of capital and variability of demand. The result is that an implementation of the model would yield lower inventory levels and therefore lower inventory carrying costs. The conclusion of the report is that the model could be implemented, although possibly with high administrative costs in the beginning. Bygg Ole also has the possibility of using an ordering system based on the mathematical GSL (Guaranteed Service-Level) model in combination with demand forecasts produced by the sales department of Bygg Ole, which could potentially increase precision in inventory management. The current inventory carrying charge is set relatively low, which reduces the calculated savings from implementing the model; if the carrying charge were higher, the benefits of implementation would be more evident. The recommendation for Bygg Ole is to apply the recommended GSL model, in combination with a demand forecasting system, to a few selected products and then evaluate the result.
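
For readers unfamiliar with guaranteed-service-type policies, the sketch below shows the kind of base-stock calculation such a model builds on: expected demand over the replenishment time plus a safety stock sized for a target service level. The figures are hypothetical and the formula is a generic textbook form, not necessarily the exact model used in the thesis.

```python
from math import sqrt
from scipy.stats import norm

def base_stock_level(mu_daily, sigma_daily, lead_time_days, service_level):
    """Base-stock level covering demand over the replenishment time.

    S = mu * tau + z * sigma * sqrt(tau): expected demand over the lead
    time plus a safety stock sized for the target service level.
    """
    z = norm.ppf(service_level)
    cycle_stock = mu_daily * lead_time_days
    safety_stock = z * sigma_daily * sqrt(lead_time_days)
    return cycle_stock + safety_stock

# Hypothetical figures (not Bygg Ole's data): 40 units/day on average,
# a standard deviation of 12 units/day, 5-day replenishment, 95% service.
print(round(base_stock_level(40, 12, 5, 0.95)))   # ~244 units
```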
106

Measuring RocksDB performance and adaptive sampling for model estimation

Laprés-Chartrand, Jean 01 1900 (has links)
This thesis focuses on two topics: statistical learning, and the prediction of key performance indicators in the performance evaluation of a storage engine. The statistical learning part presents a novel algorithm that adjusts the sampling size for the Monte Carlo approximation of the function to be minimized, allowing a reduction of the true function with a given probability at a lower numerical cost. The sampling strategy is embedded in a trust-region algorithm and uses the Fisher information matrix, also called the BHHH approximation, to approximate the Hessian matrix. The sampling strategy is tested on a logit model fitted to synthetic data generated from the same model. Numerical results exhibit a significant reduction in the time required to optimize the model when adequate smoothing is applied to the function. The key performance indicator prediction part describes a novel strategy for selecting better RocksDB settings that optimize its throughput, using the log files to analyze and identify suboptimal parameters, opening the possibility of greatly accelerating the tuning of modern storage engines.
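
As context for the BHHH approximation mentioned above, the sketch below computes the outer-product-of-scores Hessian approximation for a binary logit log-likelihood on synthetic data. It is a generic illustration, not the thesis code, and the data-generating values are arbitrary.

```python
import numpy as np

def bhhh_approximation(beta, X, y):
    """BHHH (outer product of per-observation scores) Hessian approximation
    for a binary logit log-likelihood, i.e. the Fisher-information-style
    approximation referred to in the abstract."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))      # predicted probabilities
    scores = (y - p)[:, None] * X            # one score vector per observation
    return scores.T @ scores                 # sum of outer products g_i g_i^T

# Synthetic logit data (illustration only).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
true_beta = np.array([0.5, -1.0, 2.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

H_bhhh = bhhh_approximation(true_beta, X, y)
print(H_bhhh.shape)   # (3, 3), usable in place of the exact Hessian
```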
107

Modèles de mélange de von Mises-Fisher / Von Mises-Fisher mixture models

Parr Bouberima, Wafia 15 November 2013 (has links)
In contemporary life, directional data are present in most areas, in several forms and aspects and in large sizes/dimensions; hence the need for effective methods for studying the problems arising in these fields. For clustering, the probabilistic approach has become a classical one, based on a simple idea: since the g classes differ from one another, each class is assumed to follow a known probability distribution whose parameters generally differ from one class to another; this is mixture modelling. Under this assumption, the initial data are considered a sample of a d-dimensional random variable whose density is a mixture of g probability distributions, each specific to a class. In this thesis we are interested in the clustering of directional data, treated with the classification methods best suited to this case under two approaches: geometric and probabilistic. In the first, k-means-type algorithms are explored and compared; in the second, the parameters are estimated directly by maximizing the log-likelihood and a partition is deduced from them, an approach represented by the EM algorithm. For the latter approach, we use the mixture model of von Mises-Fisher distributions and propose variants of the EM algorithm: EMvMF, CEMvMF, SEMvMF and SAEMvMF. In the same context, we address the problems of finding the number of components and choosing the mixture model, using several information criteria: Bic, Aic, Aic3, Aic4, Aicc, Aicu, Caic, Clc, Icl-Bic, Ll, Icl, Awe. The study concludes with a comparison of the vMF model with a simpler exponential model, which assumes that the data are distributed on a hypersphere of predefined radius ρ greater than or equal to one, rather than on the unit hypersphere as in the vMF case. We propose an improvement of the exponential model based on estimating the radius ρ within the NEM algorithm, which in most of our applications allowed us to find better partitions, and we introduce the corresponding variants NEMρ, NCEMρ and NSEMρ. The algorithms proposed in this work were tested on a variety of textual data, genetic data and data simulated from the von Mises-Fisher (vMF) model; these applications gave us a better understanding of the different approaches studied throughout this thesis.
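
To make the vMF mixture machinery concrete, the sketch below evaluates the von Mises-Fisher log-density and the E-step responsibilities of a mixture, the building block shared by the EMvMF-type algorithms discussed above. It is a minimal illustration, not the author's implementation; the example components are hypothetical.

```python
import numpy as np
from scipy.special import ive

def log_vmf_density(X, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit sphere.

    X: (n, d) unit vectors, mu: (d,) unit mean direction, kappa > 0.
    """
    d = X.shape[1]
    v = d / 2.0 - 1.0
    # log I_v(kappa) computed stably via the exponentially scaled Bessel function.
    log_bessel = np.log(ive(v, kappa)) + kappa
    log_c = v * np.log(kappa) - (d / 2.0) * np.log(2 * np.pi) - log_bessel
    return log_c + kappa * (X @ mu)

def e_step(X, weights, mus, kappas):
    """Responsibilities of a vMF mixture (the E step of an EMvMF-type algorithm)."""
    log_r = np.stack([np.log(w) + log_vmf_density(X, m, k)
                      for w, m, k in zip(weights, mus, kappas)], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)       # stabilise before exponentiating
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

# Toy usage on the 2-sphere with two hypothetical components.
X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
resp = e_step(X, [0.5, 0.5],
              [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])], [10.0, 10.0])
print(resp.round(3))   # each point assigned mostly to the component it points at
```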
108

Le statisticien neuronal : comment la perspective bayésienne peut enrichir les neurosciences / The neuronal statistician : how the Bayesian perspective can enrich neuroscience

Dehaene, Guillaume 09 September 2016 (has links)
Bayesian inference answers key questions of perception, such as: "What should I believe given what I have perceived?". As such, it is a rich source of models for cognitive science and neuroscience (Knill and Richards, 1996). This PhD manuscript explores two such models. We first investigate an efficient coding problem, asking how to best represent probabilistic information in unreliable neurons; we innovate over earlier models of this kind by modelling finite input information. We then explore a new ideal-observer model of sound-source localization based on the interaural time difference cue, whereas current models are purely descriptive models of the electrophysiology. Finally, we explore the properties of the Expectation Propagation approximate-inference algorithm, which holds great promise both for practical machine-learning applications and for models of neuronal populations, but is currently very poorly understood.
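
As a minimal illustration of the Bayesian question quoted above (not a model from the thesis), the sketch below combines a Gaussian prior with a noisy Gaussian measurement; the posterior mean is the familiar precision-weighted average, the kind of computation ideal-observer models attribute to perception. All numbers are hypothetical.

```python
def gaussian_posterior(prior_mean, prior_var, measurement, noise_var):
    """Posterior over a stimulus s with prior N(prior_mean, prior_var)
    and likelihood N(measurement; s, noise_var): precisions add, and the
    posterior mean is a precision-weighted average."""
    post_precision = 1.0 / prior_var + 1.0 / noise_var
    post_var = 1.0 / post_precision
    post_mean = post_var * (prior_mean / prior_var + measurement / noise_var)
    return post_mean, post_var

# Hypothetical numbers: weak prior at 0, a measurement of 2.0 with moderate noise.
print(gaussian_posterior(0.0, 4.0, 2.0, 1.0))   # mean pulled most of the way to 2.0
```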
109

"It's alive!" : Hur Frankensteinberättelsen förändrats från Mary Shelleys originaltext till Mary Shelley’s Frankenstein genom tre andra filmatiseringar / "It's alive!" : How the story of Frankenstein has changed from Mary Shelley’s original to Mary Shelley’s Frankenstein through three other movie adaptations

Thonander Lindalen, Simon January 2023 (has links)
This essay compares Mary Shelley’s Frankenstein; or, The Modern Prometheus with Kenneth Branagh’s 1994 film Mary Shelley’s Frankenstein. The purpose of the essay is to examine how some of the changes from text to film can be traced to earlier Frankenstein adaptations, specifically Frankenstein and Bride of Frankenstein (1931 and 1935, respectively) by James Whale, and The Curse of Frankenstein (1957) by Terence Fisher. The research shows that certain changes made in earlier films, due to those films’ time and place in history, have become part of the general image of the Frankenstein story and thus survive even in later film adaptations. The conclusion is drawn that Mary Shelley’s Frankenstein is not only an adaptation of Mary Shelley’s text, but can also be seen as an adaptation of previous filmmakers’ works.
110

Treatment heterogeneity and potential outcomes in linear mixed effects models

Richardson, Troy E. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Gary L. Gadbury / Studies commonly focus on estimating a mean treatment effect in a population. However, in some applications the variability of treatment effects across individual units may help to characterize the overall effect of a treatment across the population. Consider a set of treatments, {T, C}, where T denotes some treatment that might be applied to an experimental unit and C denotes a control. For each of N experimental units, the pair {r_Ti, r_Ci}, i = 1, 2, ..., N, represents the potential response of the i-th experimental unit if treatment were applied and the response of the experimental unit if control were applied, respectively. The causal effect of T compared to C is the difference between the two potential responses, r_Ti - r_Ci. Much work has been done to elucidate the statistical properties of a causal effect, given a set of particular assumptions. Gadbury and others have reported on this for some simple designs, focusing primarily on finite-population randomization-based inference. When designs become more complicated, the randomization-based approach becomes increasingly difficult. Since linear mixed effects models are particularly useful for modeling data from complex designs, their role in modeling treatment heterogeneity is investigated. It is shown that an individual treatment effect can be conceptualized as a linear combination of fixed treatment effects and random effects. The random effects are assumed to have variance components specified in a mixed effects "potential outcomes" model in which both potential outcomes, r_T and r_C, are variables. The variance of the individual causal effect is used to quantify treatment heterogeneity. Post treatment assignment, however, only one of the two potential outcomes is observable for a unit. It is then shown that the variance component for treatment heterogeneity becomes non-estimable in an analysis of observed data. Furthermore, estimable variance components in the observed-data model are demonstrated to arise from linear combinations of the non-estimable variance components in the potential outcomes model. Mixed effects models are considered in the context of a particular design in an effort to illuminate the loss of information incurred when moving from a potential outcomes framework to an observed-data analysis.
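
The non-estimability result described above can be made concrete with a small simulation. In the identity Var(r_T - r_C) = Var(r_T) + Var(r_C) - 2 Cov(r_T, r_C), the marginal variances are estimable from the treated and control arms, but the covariance is not, because the two potential outcomes are never observed on the same unit. The sketch below uses a hypothetical potential-outcomes model invented for illustration, not the thesis design.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical potential-outcomes model (illustration only): both potential
# outcomes share a unit-level random effect, so individual effects vary.
u = rng.normal(0.0, 1.0, N)                      # unit-level random effect
r_C = 10.0 + u + rng.normal(0.0, 1.0, N)         # response under control
r_T = 12.0 + 0.5 * u + rng.normal(0.0, 1.5, N)   # response under treatment

D = r_T - r_C                                    # individual causal effects
print(D.var())                                   # true treatment heterogeneity (~3.5 here)

# After assignment, only one potential outcome is observed per unit.
treated = rng.random(N) < 0.5
obs_T, obs_C = r_T[treated], r_C[~treated]
# The marginal variances are estimable, but Cov(r_T, r_C) is not, so Var(D)
# itself cannot be recovered; assuming the covariance were zero would give:
print(obs_T.var() + obs_C.var())                 # ~4.5, not equal to Var(D)
```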
