21 | The Agnostic's Response to Climate Deniers: Price Carbon!
van der Ploeg, Frederick; Rezai, Armon (09 1900)
With the election of President Trump, climate deniers feel emboldened and have moved from the fringes to the centre of global policy making. We study how an agnostic approach to policy, based on Pascal's wager and allowing for subjective prior probability beliefs about whether climate deniers are right, prices carbon. Using the DICE integrated assessment model, we find that assigning a 10% chance of climate deniers being correct lowers the global price on carbon in 2020 only marginally: from $21 to $19 per ton of carbon dioxide if policymakers apply "Nordhaus discounting" and from $91 to $84 per ton of carbon dioxide if they apply "Stern discounting". An agnostic's acknowledgement of remaining scientific uncertainty therefore leaves climate policy essentially unchanged. The robustness of an ambitious climate policy also follows from using the max-min or the min-max regret principle. Letting the coefficient of relative ambiguity aversion vary from zero, corresponding to expected utility analysis, to infinity, corresponding to the max-min principle, we show how policymakers can deal with fundamental climate model uncertainty if they are prepared to assign prior probabilities to different views of the world being correct. Allowing for an ethical discount rate as well as a higher market discount rate, and for a wide range of sensitivity exercises including damage uncertainty, we show that pricing carbon is the robust response under rising climate scepticism. / Series: Ecological Economic Papers
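The agnostic pricing logic can be caricatured in a few lines of code (a toy sketch, not the DICE computation reported in the abstract): marginal damages under two worldviews are blended with a subjective prior, and a coefficient of relative ambiguity aversion interpolates between the expected value (expected utility analysis) and the worst case (the max-min principle). All prices, priors and function names below are hypothetical.

```python
def robust_marginal_damage(damages, probs, ambiguity):
    """Weighted power mean of positive marginal damages.
    ambiguity = 0        -> probability-weighted expected damage (expected utility);
    ambiguity -> infinity -> the worst case across worldviews (max-min principle)."""
    m = 1.0 + ambiguity
    return sum(p * d ** m for p, d in zip(probs, damages)) ** (1.0 / m)

# Hypothetical marginal damages ($/tCO2) under two worldviews:
# mainstream climate science vs. "the deniers are right".
damages = [21.0, 1.0]
prior = [0.9, 0.1]      # subjective prior on each worldview being correct

for a in (0.0, 2.0, 10.0, 100.0):
    price = robust_marginal_damage(damages, prior, a)
    print(f"ambiguity aversion {a:>5}: robust carbon price {price:5.1f} $/tCO2")
```

Raising the ambiguity-aversion coefficient pulls the blended price toward the more pessimistic worldview, which is the sense in which an ambitious carbon price is robust to climate scepticism; in the paper itself the aversion is applied to welfare inside DICE, not to prices directly.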
22 | Matching of geometrically and topologically changing meshes
Jonsson, Kristoffer (January 2015)
The aim of this thesis is to develop a foundation for a compression system for animated mesh sequences, specifically under dynamic change of mesh geometry and topology. Compression of mesh sequences is of special interest in the game industry, and this thesis is part of an ongoing series of projects at EA DICE. One of the primary challenges in building a mesh compression system is constructing a bijective matching between subsets of the mesh surfaces in two subsequent frames of the animation to guide remeshing of the sequence. This thesis describes a method for producing a bijective set of matching mesh patches between two meshes, along with an error metric that captures the quality of the matching in terms of shape similarity and distortion. Theory from mathematical topology and tensor algebra, used in methods for high-performance scientific 3D-image recognition, is adopted here to extract similar local features between meshes. Techniques for creating parametrizations of mesh patches are combined with techniques for matching point clouds and deforming mesh geometry under energy minimization in order to produce a matching set of patches. The presented algorithm successfully creates bijective sets of matched patches for subsequent meshes in a sequence and measures the error of the matchings. Results show an average matching set size of approximately 25% of the mesh areas over a sequence of meshes. This suggests that the data size of such a sequence could potentially be reduced by 25%.
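One building block that a shape-similarity error of this kind can use is a point-cloud distance. The sketch below is a toy illustration, not the metric developed in the thesis: it computes a symmetric chamfer distance between two sampled patches, and the patch data are synthetic.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between two point sets of shape (N, 3) and (M, 3):
    average distance from each point to its nearest neighbour in the other set.
    A small value suggests the two patches have similar shape."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy example: two noisy samplings of the same planar patch.
rng = np.random.default_rng(0)
patch_a = rng.uniform(size=(200, 3)) * [1.0, 1.0, 0.0]
patch_b = patch_a + rng.normal(scale=0.01, size=patch_a.shape)
print(chamfer_distance(patch_a, patch_b))
```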
23 | Tabletop game player experience in the age of digitization : Social and material aspects of play
Tjernberg, Wilmer (January 2021)
This thesis explores the material and social aspects of playing tabletop games in person versus remotely. It also examines the experiences of contemporary players of tabletop games, with a focus on play during the COVID-19 pandemic. The report begins with an explanation of tabletop games, including social and material aspects as examined in previous work. To explore the problem area of the thesis, several tabletop game players were interviewed, and a number of recurring themes emerged from the interviews. The interview results suggest that social rituals and material aspects of tabletop games are highly important to players. This has implications for the future of tabletop games, several of which are discussed in the text.
24 | Pre-planning of Individualized Ankle Implants Based on Computed Tomography - Automated Segmentation and Optimization of Acquisition Parameters / Operationsplanering av individuella fotledsimplantat baserat på datortomografi - Automatiserad segmentering och optimering av datortomografibilder
Engström Messén, Matilda; Moser, Elvira (January 2021)
The structure of the ankle joint complex creates an ideal balance between mobility and stability, which enables gait. If a lesion emerges in the ankle joint complex, the anatomical structure is altered, which may disturb mobility and stability and cause intense pain. A lesion in the articular cartilage on the talus bone, or a lesion in the subchondral bone of the talar dome, is referred to as an Osteochondral Lesion of the Talus (OLT). Replacing the damaged cartilage or bone with an implant is one of the methods that can be applied to treat OLTs. Episurf Medical develops and produces patient-specific implants (Episealers) along with the necessary associated surgical instruments by, inter alia, creating a corresponding 3D model of the ankle (talus, tibia, and fibula bones) based on either a Magnetic Resonance Imaging (MRI) scan or a Computed Tomography (CT) scan. Presently, the 3D models based on MRI scans can be created automatically, but the 3D models based on CT scans must be created manually, which can be very time-demanding. In this thesis project, a U-net based Convolutional Neural Network (CNN) was trained to automatically segment 3D models of ankles based on CT images. Furthermore, in order to optimize the quality of the incoming CT images, this thesis project also consisted of an evaluation of the specified parameters in the Episurf CT talus protocol that is being sent out to the clinics. The performance of the CNN was evaluated using the Dice Coefficient (DC) with five-fold cross-validation. The CNN achieved a mean DC of 0.978±0.009 for the talus bone, 0.779±0.174 for the tibial bone, and 0.938±0.091 for the fibula bone. The values for the talus and fibula bones were satisfactory and comparable to results presented in previous research; however, due to background artefacts in the images, the DC achieved by the network for the segmentation of the tibial bone was lower than the results presented in previous research. To correct this, a noise-reducing filter will be implemented. / Fotledens komplexa anatomi ger upphov till en ideal balans mellan rörlighet och stabilitet, vilket i sin tur möjliggör gång. Fotledens anatomi förändras när en skada uppstår, vilket kan påverka rörligheten och stabiliteten samt orsaka intensiv smärta. En skada i talusbenets ledbrosk eller i det subkondrala benet på talusdomen benämns som en Osteochondral Lesion of the Talus (OLT). En metod att behandla OLTs är att ersätta den del brosk eller ben som är skadat med ett implantat. Episurf Medical utvecklar och producerar individanpassade implantat (Episealers) och tillhörande nödvändiga kirurgiska instrument genom att, bland annat, skapa en motsvarande 3D-modell av fotleden (talus-, tibia- och fibula-benen) baserat på en skanning med antingen magnetisk resonanstomografi (MRI) eller datortomografi (CT). I dagsläget kan de 3D-modeller som baseras på MRI-skanningar skapas automatiskt, medan de 3D-modeller som baseras på CT-skanningar måste skapas manuellt - det senare ofta tidskrävande. I detta examensarbete har ett U-net-baserat Convolutional Neural Network (CNN) tränats för att automatiskt kunna segmentera 3D-modeller av fotleder baserat på CT-bilder. Vidare har de specificerade parametrarna i Episurfs CT-protokoll för fotleden som skickas ut till klinikerna utvärderats, detta för att optimera bildkvaliteten på de CT-bilder som används för implantatspositionering och design. Det tränade nätverkets prestanda utvärderades med hjälp av Dicekoefficienten (DC) med en fem-delad korsvalidering.
Nätverket åstadkom en genomsnittlig DC på 0.978±0.009 för talusbenet, 0.779±0.174 för tibiabenet, och 0.938±0.091 för fibulabenet. Värdena för talus och fibula var adekvata och jämförbara med resultaten presenterade i tidigare forskning. På grund av bakgrundsartefakter i bilderna blev den DC som nätverket åstadkom för sin segmentering av tibiabenet lägre än tidigare forskningsresultat. För att korrigera för bakgrundsartefakterna kommer ett brusreduceringsfilter att implementeras.
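For reference, the evaluation metric used above, the Dice coefficient, can be sketched in a few lines on toy binary masks (the thesis applies it to full 3D CT segmentations with five-fold cross-validation); the mask values below are made up.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D example; a 3D CT segmentation is evaluated the same way, voxel by voxel.
pred   = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
target = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
print(dice_coefficient(pred, target))   # 2*2 / (3 + 2) = 0.8
```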
25 | Managing Climate Overshoot Risk with Reinforcement Learning : Carbon Dioxide Removal, Tipping Points and Risk-constrained RL / Hantering av risk vid överskjutning av klimatmål med förstärkande inlärning : Koldioxidinfångning, tröskelpunkter och riskbegränsad förstärkande inlärning
Kerakos, Emil (January 2024)
In order to study how to reach different climate targets, scientists and policymakers rely on results from computer models known as Integrated Assessment Models (IAMs). These models are used to quantitatively study different ways of achieving warming targets such as the Paris goal of limiting warming to 1.5-2.0 °C, deriving climate mitigation pathways that are optimal in some sense. However, when applied to the Paris goal, many IAMs derive pathways that overshoot the temperature target: global temperature exceeds the warming target for a period of time before decreasing and stabilizing at the target. Although little is known with certainty about the impacts of overshooting, recent studies indicate that it may entail major risks. This thesis explores two different ways of including overshoot risk in a simple IAM by introducing stochastic elements into it. Algorithms from Reinforcement Learning (RL) are then applied to the model in order to find pathways that take overshoot risk into consideration. In one experiment we apply standard risk-neutral RL to the DICE model extended with a probabilistic damage function and carbon dioxide removal technologies. In the other experiment, the model is further augmented with a probabilistic tipping element model. Using risk-constrained RL we then train an algorithm to optimally control this model while keeping the conditional value-at-risk (CVaR) of triggering tipping elements below a user-specified threshold. Although some instability and convergence issues are present during training, in both experiments the agents are able to achieve policies that outperform a simple baseline. Furthermore, the risk-constrained agent is also able to (approximately) keep the tipping risk metric below a desired threshold in the second experiment. The final policies are analysed for domain insights, indicating that carbon removal via temporary carbon storage solutions could be a sizeable contributor to negative emissions on a time horizon relevant for overshooting. In the end, recommended next steps for future work are discussed. / För att studera hur globala klimatmål kan nås använder forskare och beslutsfattare resultat från integrerade bedömningsmodeller (IAM:er). Dessa modeller används för att kvantitativt förstå olika vägar till temperaturmål, så som Parisavtalets mål om att begränsa den globala uppvärmningen till 1.5-2.0 °C. Resultaten från dessa modeller är så kallade ”mitigation pathways” som är optimala utifrån något uppsatt kriterium. När sådana modellkörningar görs med Parismålet erhålls dock ofta optimala pathways som överskjuter temperaturmålet tillfälligt: den globala temperaturen överstiger målet i en period innan den sjunker och till slut stabiliseras vid det satta målet. Kunskapen om vilken påverkan en överskjutning har är idag begränsad, men flertalet nyligen gjorda studier indikerar att stora risker potentiellt kan medföras. I denna uppsats utforskas två olika sätt att inkludera överskjutningsrisk i en enkel IAM genom användandet av stokastiska element. Därefter används Förstärkande Inlärning på modellen för att erhålla modellösningar som tar hänsyn till överskjutningsrisk. I ett av experimenten utökas IAM:en med en stokastisk skadefunktion och tekniker för koldioxidinfångning varpå vanlig Förstärkande Inlärning appliceras. I det andra experimentet utökas modellen ytterligare med en stokastisk modell för tröskelpunkter.
Med hjälp av risk-begränsad Förstärkande Inlärning tränas därefter en modell för att optimalt kontrollera denna IAM samtidigt som risken att utlösa tröskelpunkter kontrolleras till en nivå satt av användaren. Även om en viss grad av instabilitet och problem med konvergens observeras under inlärningsprocessen så lyckas agenterna i båda experimenten hitta beslutsregler som överträffar en enkel baslinje. Vidare lyckas beslutsregeln som erhålls i det andra experimentet, med den risk-begränsade inlärningen, approximativt kontrollera risken att utlösa tröskelpunkter till det specificerade värdet. Efter träning analyseras de bästa beslutsreglerna i syfte att finna domänmässiga insikter, varav en av dessa insikter är att temporära kollager kan ge betydande bidrag för koldioxidinfångning i en tidshorisont relevant vid överskjutning. Slutligen diskuteras möjliga nästa steg för framtida arbeten inom området.
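For readers unfamiliar with the constrained quantity, here is a minimal sketch of an empirical conditional value-at-risk (CVaR) estimate over simulated rollouts. The tipping-outcome distribution and all numbers are invented, and this is not the thesis' risk-constrained RL algorithm, only the tail-risk statistic it constrains.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst (1 - alpha)
    fraction of sampled losses."""
    losses = np.sort(np.asarray(losses))
    tail = losses[int(np.ceil(alpha * len(losses))):]
    return tail.mean()

# Toy example: simulated "number of tipping elements triggered" per rollout.
rng = np.random.default_rng(1)
tipping_counts = rng.poisson(lam=0.3, size=10_000)
print("mean risk :", tipping_counts.mean())
print("CVaR(95%) :", empirical_cvar(tipping_counts, alpha=0.95))
```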
26 | Les tableaux homonymiques, principe d'unité du Cornet à dés de Max Jacob
Dahan, Marianne (05 1900)
Ce mémoire s’intéresse à la structure du recueil de poésie en prose Le Cornet à dés (1917) de Max Jacob. À la lecture de l’ensemble, on remarque qu’il est sans cesse question de choses ou d’évènements auxquels renvoient les diverses significations du mot tableau : œuvres picturales, descriptions imagées, cadres de fenêtre ou de porte, vieillards (vieux tableaux), tableaux vivants, subdivisions de pièces de théâtre ou encore tableaux d’école. Subdivisé en trois chapitres, ce travail s’attachera dans un premier temps au fait que tous ces homonymes sont traités, dans les poèmes, comme des peintures. Entre fixité et mouvement, les descriptions et les narrations rapprochent la littérature de l’art pictural, ce qui contribue à l’esthétique du doute caractéristique de l’œuvre de Max Jacob. Le deuxième chapitre s’intéresse aux procédés de reprise et à la manière dont ils permettent de faire des liens entre les poèmes. À partir des théories du mouvement et de la répétition, nous verrons comment les divers motifs forment, à la manière des dés, différentes combinaisons d’une pièce à l’autre. Inspiré par les peintres cubistes qui présentent simultanément tous les angles d’un même objet, l’auteur fait le tour du mot tableau. Dans le dernier chapitre, il ressort que la juxtaposition des poèmes donne accès à un surcroît de signification : certains éléments arbitraires comme des titres obscurs prennent soudainement sens. Une réflexion sur la lecture vient compléter ce travail puisque les nombreuses répétitions sont traitées dans la mémoire. Ce travail s’inscrit dans le champ des études sur le recueil et s’appuie principalement sur l’analyse de poèmes. / This dissertation treats the structure of Le Cornet à dés (1917), a collection of prose poems written by Max Jacob. Upon reading this collection, one notices that things and events referring to the different definitions of the word “tableau” are repeatedly employed : paintings, visual descriptions, window and door frames, elders (vieux tableaux), living pictures (tableaux vivants), theater scenes and also blackboards. This dissertation, divided into three chapters, starts by exploring how these homonyms are employed as paintings in the poems. In-between fixity and movement, the descriptions and the narrations bring literature closer to pictorial art. This contributes to the aesthetic of doubt found in Max Jacob’s written work. The second chapter analyzes different kinds of repetitions and the way they build links between the poems. By employing the theories of movement and repetition, this dissertation demonstrates how the various motifs, similarly to a pair of dice, form different combinations from one poem to another. Inspired by the cubist painters who simultaneously show all the angles of an object, Jacob thoroughly examines the word “tableau”. In the last chapter, it becomes evident that the juxtaposition of the poems gives access to additional significance: certain arbitrary elements, such as obscure titles, suddenly make sense. A reflection on the act of reading concludes this dissertation, since the numerous repetitions are stored in the reader’s memory. This work falls within the field of collection studies and mainly relies on poetry analysis.
27 | O co-relato Mallarmé / Haroldo de Campos: o mito moderno em "Um lance de dados" / The correlation Mallarmé / Haroldo de Campos: the modern myth in A throw of the dice
Bento, Sérgio Guilherme Cabral (10 September 2008)
In its modern concept, the myth is a behavioral paradigm, a symbolic model of an external referent. On this basis, this study argues that the poem A throw of the dice, by Stéphane Mallarmé, acquires such mythical status, both through its cosmogonic nature, which is commonly ignored by critics, and through its formal innovations, for which it became acclaimed. As a result, over the twentieth century it underwent a process of mythification, being elevated to one of the inspirational sources of recent and contemporary poetry. To make this approach possible, the study was limited to a comparison of the poem with its re-creation in Portuguese by Haroldo de Campos. This dialogue between translation and original not only renews the myth of A throw of the dice through the ritual value of the act of translating, but also brings the analysis closer to the present. As instruments for this exegesis, Gestalt theory (its principles of form organization; the maxim that the whole is not a mere sum of its parts but carries a quality of its own; and the phenomenon of psychoneural correlation in human visual perception) ensures that the poem is considered in its totality, as a verbal, visual and sonic entity. In short, we conclude that A throw of the dice is an account of the (re-)creation of the Universe, of the human being and of Art, guided not by a divine power but generated by human thought, the key factor in the new Enlightenment bourgeois society of Modernity. The modern myth is thus formed. / Em seu conceito moderno, o mito é um paradigma comportamental, um sistema semiológico de algum referente externo. Baseado nisto, este trabalho sustenta que o poema Um lance de dados, de Stéphane Mallarmé, adquire status mítico, quer pelo seu caráter cosmogônico comumente ignorado pela crítica quer pela sua inovação formal fato que o consagrou, e sob cujo prisma é unicamente lembrado. Em virtude disso, sofreu ao longo do século XX um processo de mitificação ao ser promovido à condição de uma das mais importantes fontes de inspiração da poesia recente e contemporânea. Para que tal abordagem fosse possível, buscou-se delimitar o estudo do texto proposto em correlação com sua recriação em língua portuguesa, feita por Haroldo de Campos. Tal diálogo tradução/original não apenas atualiza o mito Um lance de dados pelo valor ritualístico que possui o ato de traduzir, mas também permite à análise uma aproximação da contemporaneidade. Como instrumento de exegese, as teorias da Gestalt princípios de organização da forma; máxima de que o todo não é a mera soma das partes, mas possui uma qualidade diferenciada destas; e o fenômeno da correlação psiconeural na percepção visual humana asseguraram que a obra fosse considerada em sua totalidade, enquanto entidade visual, verbal e sonora. Deste modo, chegou-se à conclusão que Um lance de dados é um relato da (re-) criação do Universo, do ser humano e da Arte não sob a condução de uma força divina, mas gerada pelo pensamento humano, novo fator-chave na sociedade iluminista-burguesa da Modernidade. Está formado o mito moderno
28 | Calibration of Wide Field Imagers - The SkyDICE Project
Rocci, P.-F. (04 November 2013)
Cosmology has now entered an era of precision measurements, and the goal of observations is now to hunt for inconsistencies within the Cosmological Model. The measurement of the luminosity distances of Type Ia supernovae (SNe Ia) as a function of their redshift led to the discovery of the acceleration of the cosmic expansion. Today, SNe Ia remain the probe most sensitive to w, the equation of state of dark energy, and a growing number of SNe Ia are detected and studied by several collaborations around the world in order to refine the measurement of w. The precision on w is now as low as 7%, with nearly 1000 SNe Ia in the Hubble diagram. Unfortunately, the measurement is now dominated by systematic uncertainties, the main source of systematics being the photometric calibration of the imagers used to measure the SNe Ia fluxes. This thesis deals with photometric calibration. To improve on current results, astronomers have no choice but to rethink the older calibration systems. Since 2005, the dark-energy collaborations have launched ambitious calibration efforts, redefining the primary standards and the metrology linking those standards to their science images, in order to push the error budget well below 1%. Since 2008, the Cosmology group at LPNHE has been involved in building a spectrophotometric calibration system for the latest generation of wide-field imagers. In particular, the team designed and built SkyDICE (SkyMapper Direct Illumination Calibration Experiment), installed in the dome of the SkyMapper telescope (Siding Spring Observatory, Australia). In this project we have shown that it is possible to build an LED-based light source that uniformly samples the full visible wavelength range of the SkyMapper telescope. The stability of the source is remarkable, ranging from a few parts in 10^4 for most of the LEDs to about 10^-3 for the least stable channels. I describe in detail the spectrophotometric calibration of the device on our test bench at LPNHE. More importantly, I show that it is possible to build a spectrophotometric model of each LED that can predict the LED spectrum at any temperature T. Each of these models comes with an uncertainty budget that accounts for (1) the limited number of spectroscopic and photometric measurements and (2) the uncertainties of the test bench. Finally, I describe a method to calibrate the effective passbands of the imager and to monitor the filters with series of calibration images taken with SkyDICE. This method takes all the test-bench uncertainties into account and propagates them as exactly as possible. The method is currently being applied to the real SkyDICE dataset, and what is presented here is a set of tests performed on simulated datasets. An important result of this work is that, despite the fact that LEDs are not monochromatic sources, we are able to monitor the positions of the filter edges with a precision well below 1 nm. Regarding the calibrated passbands, we computed the uncertainties affecting our estimates of the passband normalizations relative to the r band. In the best-case scenario, in which the uncertainties are all positively correlated, we showed that after a few calibration runs we reach a precision of about 0.4% in the u and v bands and about 0.3% in the other bands. The analysis of the SkyDICE dataset is still ongoing and the first constraints will be published soon.
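The key idea, that a stable and well-modelled LED can locate a filter edge to better than 1 nm even though it is not monochromatic, can be illustrated with a toy calculation: integrate an assumed LED spectrum against a parametrized filter edge and observe how the synthetic flux responds to small edge shifts. The Gaussian LED, the logistic filter edge and all numbers below are placeholders, not the SkyDICE or SkyMapper models.

```python
import numpy as np

wl = np.linspace(500.0, 700.0, 2001)          # wavelength grid [nm]
dwl = wl[1] - wl[0]

def led_spectrum(wl, center=600.0, width=15.0):
    """Placeholder Gaussian LED spectrum (the thesis models each real LED,
    including its temperature dependence, from bench measurements)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def filter_transmission(wl, edge=610.0, sharpness=1.0):
    """Placeholder red cut-off filter: a smooth step located at `edge` nm."""
    return 1.0 / (1.0 + np.exp((wl - edge) / sharpness))

def synthetic_flux(edge):
    """Broadband flux of the LED seen through the filter (arbitrary units)."""
    return np.sum(led_spectrum(wl) * filter_transmission(wl, edge)) * dwl

# A 1 nm shift of the filter edge changes the integrated flux at the percent
# level here, which is what lets a non-monochromatic source locate the edge.
for edge in (609.0, 610.0, 611.0):
    print(f"edge = {edge:.0f} nm -> flux = {synthetic_flux(edge):.3f}")
```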
29 | Carbon dioxide emission pathways avoiding dangerous ocean impacts
Kvale, Karin (17 January 2009)
Radiative forcing by increased atmospheric levels of greenhouse gases (GHGs) produced by human activities could lead to strongly undesirable effects on oceans and their dependent human systems in the coming centuries. Such dangerous anthropogenic interference with the climate system is a possibility the UN Framework Convention on Climate Change (UNFCCC) calls on nations to avoid. Unacceptable consequences of such interference could include inundation of coastal areas and low-lying islands by rising sea level, the rate of which could exceed natural and human ability to adapt, and ocean acidification contributing to widespread disruption of marine and human food systems. Such consequences pose daunting socioeconomic costs, for developing nations in particular.
Drawing on existing literature, we define example levels of acceptable global marine change in terms of global mean temperature rise, sea level rise and ocean acidification. A global-mean climate model (ACC2) is implemented in an optimizing environment, GAMS, and coupled to an economic model (DICE). Using cost-effectiveness analysis and the tolerable windows approach (TWA) allows for the computation both of economically optimal carbon dioxide emission pathways and of a range of carbon dioxide emissions (the so-called "emissions corridor") that respects the predetermined ceilings and takes into account the socio-economically acceptable pace of emissions reductions.
The German Advisory Council on Global Change (WBGU) has issued several guardrails focused on marine changes, of which we find the rate and absolute rise in global mean temperature to be the most restrictive (0.2 degrees Celsius per decade, 2 degrees Celsius total). Respecting these guardrails will require large reductions in both carbon and non-carbon GHGs over the next century, regardless of equilibrium climate sensitivity. WBGU sea level rise and rate of rise guardrails (1 meter absolute, 5 cm per decade) are substantially less restrictive, and respecting them does not require deviation from a business-as-usual path in the next couple of hundred years, provided common assumptions of Antarctic ice mass balance sensitivity are correct. The ocean acidification guardrail (0.2 unit decline relative to the pre-industrial value) is less restrictive than those for temperature, but does require emissions reductions into the coming century.
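As a toy illustration of how such guardrails constrain a pathway (in the spirit of the tolerable windows approach, not the coupled ACC2-DICE optimization described above), the sketch below checks a hypothetical temperature pathway against the rate and absolute temperature guardrails quoted in this abstract; the pathway numbers are invented.

```python
def violates_guardrails(temperatures, years, max_rise=2.0, max_rate_per_decade=0.2):
    """Check a global-mean temperature pathway (degrees C above pre-industrial)
    against WBGU-style absolute and rate-of-change guardrails."""
    for i in range(1, len(temperatures)):
        rate = (temperatures[i] - temperatures[i - 1]) / (years[i] - years[i - 1]) * 10.0
        if rate > max_rate_per_decade:
            return True
    return max(temperatures) > max_rise

# Hypothetical pathway sampled every decade.
years = [2010, 2020, 2030, 2040, 2050]
temps = [0.8, 1.0, 1.3, 1.5, 1.6]
print(violates_guardrails(temps, years))   # 0.3 C/decade between 2020 and 2030 -> True
```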
30 | Evoluční návrh využívající gramatickou evoluci / Evolutionary Design Using Grammatical Evolution
Repík, Tomáš (January 2017)
Evolution in nature serves as the source of inspiration for this work. The basic idea is to use the generative power of grammars in combination with an evolutionary approach. The acquired knowledge is applied to the search for behaviour strategies in diverse environments. Behaviour trees are a model commonly used to drive the decision making of various artificial intelligences. This work searches for behaviour trees that control agents solving the following two problems: a modified version of the knight's tour problem on a chessboard, and playing the dice game Pirátské kostky (Pirate Dice). Competitive coevolution was used when searching for the dice player's strategy, because it is difficult to design a fair fitness function for evaluating the players' performance.
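The genotype-to-phenotype step at the heart of grammatical evolution can be sketched with the standard modulo mapping rule. The toy arithmetic grammar and genome below are invented for illustration; the thesis maps codons to behaviour-tree grammars instead.

```python
# Toy grammar for arithmetic expressions; behaviour-tree grammars are mapped the same way.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["1"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Grammatical-evolution mapping: each codon picks a production rule via
    `codon % number_of_rules`, expanding the leftmost non-terminal first."""
    symbols, out, i, wraps = [start], [], 0, 0
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)
            continue
        if i >= len(genome):              # wrap the genome if we run out of codons
            if wraps >= max_wraps:
                return None               # mapping failed
            i, wraps = 0, wraps + 1
        rules = GRAMMAR[sym]
        choice = rules[genome[i] % len(rules)]
        i += 1
        symbols = choice + symbols        # expand leftmost non-terminal first
    return " ".join(out)

print(ge_map([0, 1, 0, 2, 1, 1, 0]))      # -> x * 1
```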