11

An Instruction Scratchpad Memory Allocation for the Precision Timed Architecture

Prakash, Aayush 11 December 2012 (has links)
This work presents a static instruction allocation scheme for the scratchpad memory of the precision timed architecture (PRET). Since PRET provides timing instructions to control the temporal execution of programs, the objective of the allocation scheme is to ensure that explicitly specified temporal requirements are met. Furthermore, the allocation incorporates instructions from multiple hardware threads of the PRET architecture. We formulate the allocation as an integer linear programming problem and implement a tool that takes binaries, constructs a control-flow graph, performs the allocation, rewrites the binary with the new allocation, and generates an output binary for the PRET architecture. We carry out experiments on a modified version of the Mälardalen benchmarks to illustrate that commonly known ACET- and WCET-based approaches cannot be directly applied to meet explicit timing requirements. We also show the advantage of performing the allocation across multiple threads, and present a real-time benchmark controlling an Unmanned Air Vehicle as a case study.
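The abstract names an integer linear programming formulation. As a rough illustration of that idea (not the thesis's actual formulation, which works over control-flow graphs and multiple hardware threads), the sketch below assigns instruction blocks to a scratchpad under a capacity limit and a hard timing constraint; all block sizes, frequencies, latencies, and the deadline are hypothetical.

```python
# Minimal sketch: 0/1 ILP placing instruction blocks into scratchpad memory
# (SPM) so that total fetch cycles are minimized and a deadline is met.
import pulp

blocks = ["b0", "b1", "b2", "b3"]
size = {"b0": 64, "b1": 128, "b2": 32, "b3": 96}      # bytes per block (assumed)
freq = {"b0": 500, "b1": 200, "b2": 800, "b3": 100}   # executions per run (assumed)
SPM_LAT, FLASH_LAT = 1, 10                            # cycles per fetch (assumed)
SPM_CAPACITY = 192                                    # bytes
DEADLINE = 20000                                      # explicit timing requirement, cycles

prob = pulp.LpProblem("spm_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("in_spm", blocks, cat=pulp.LpBinary)

# Total fetch cycles: fast SPM if allocated there, slower memory otherwise.
cycles = pulp.lpSum(freq[b] * (SPM_LAT * x[b] + FLASH_LAT * (1 - x[b]))
                    for b in blocks)
prob += cycles                                               # objective
prob += pulp.lpSum(size[b] * x[b] for b in blocks) <= SPM_CAPACITY
prob += cycles <= DEADLINE                                   # hard timing constraint

prob.solve()
for b in blocks:
    print(b, "-> SPM" if x[b].value() == 1 else "-> flash")
print("total cycles:", pulp.value(cycles))
```

With these made-up numbers the solver keeps the hottest blocks in the scratchpad, while the deadline constraint guarantees the explicit timing requirement, mirroring the distinction the abstract draws against purely ACET/WCET-driven allocation.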
12

A Tool for Analyzing Performance of Memory Allocators in Linux

Müller, Petr January 2010 (has links)
This diploma thesis presents a tool for analyzing dynamic memory allocators, focused on their performance. The work identifies the important memory allocator performance metrics, as well as the environment and program factors influencing them. Using this knowledge, a tool was designed and implemented that gathers and analyzes these metrics. The tool can also create memory allocator usage scenarios for analyzing allocator behavior under different conditions. It was tested on several freely licensed memory allocators.
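As an illustration of one metric such a tool might measure, the toy first-fit allocator below computes external fragmentation after a simple usage scenario. The allocator, the scenario, and the metric definition are our assumptions for illustration, not the tool described in the thesis.

```python
# Toy free-list allocator: external fragmentation = 1 - largest_free / total_free.
class ToyAllocator:
    def __init__(self, heap_size):
        self.free = [(0, heap_size)]              # (offset, length) free runs

    def malloc(self, n):
        for i, (off, length) in enumerate(self.free):
            if length >= n:                       # first fit
                self.free[i] = (off + n, length - n)
                if self.free[i][1] == 0:
                    del self.free[i]
                return off
        return None                               # out of memory

    def free_block(self, off, n):
        self.free.append((off, n))                # no coalescing: worst case

    def fragmentation(self):
        total = sum(length for _, length in self.free)
        largest = max((length for _, length in self.free), default=0)
        return 1 - largest / total if total else 0.0

alloc = ToyAllocator(1024)
blocks = [(alloc.malloc(100), 100) for _ in range(8)]
for off, n in blocks[::2]:                        # free every other block
    alloc.free_block(off, n)
print("external fragmentation:", alloc.fragmentation())  # ~0.64 here
```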
13

Extending Polyhedral Techniques towards Parallel Specifications and Approximations

Isoard, Alexandre 05 July 2016 (has links)
Polyhedral techniques enable analyses and code transformations on multi-dimensional structures such as nested loops and arrays. They are usually restricted to sequential programs whose control is both affine and static. This thesis extends them to programs involving, for example, non-analyzable conditions or expressing parallelism. The first result is the extension of the analysis of live ranges and memory conflicts, for scalars and arrays, to programs with parallel or approximated specifications. In previous work on memory allocation, for which this analysis is required, the concept of time provides a total order over the instructions, and the existence of this order is an implicit requirement. We show that it is possible to carry out such analyses on an arbitrary partial order matching the parallelism of the studied program. The second result extends memory folding techniques, based on Euclidean lattices, to automatically find an appropriate basis from the set of memory conflicts. This set is often non-convex, a case that was inadequately handled by previous methods. The last result applies both previous analyses to "pipelined" blocking methods, in particular with parametric block sizes. This situation gives rise to non-affine control but can be handled precisely by choosing suitable approximations. This paves the way for efficient kernel offloading to accelerators such as GPUs, FPGAs, or other dedicated circuits.
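The memory folding mentioned above can be illustrated in one dimension: two array cells conflict when their live ranges overlap, and a modular mapping is a valid folding when no two conflicting cells collide. The sketch below is our own simplification; the thesis generalizes this to multi-dimensional Euclidean lattices and non-convex conflict sets.

```python
# One-dimensional memory folding: find the smallest modulus m such that
# i -> i mod m maps no two conflicting cells to the same location.
N, DIST = 100, 3          # array size; assume a[i] is last read at step i + DIST
conflicts = {(i, j) for i in range(N) for j in range(N)
             if i != j and abs(i - j) <= DIST}    # overlapping live ranges

def valid_fold(m):
    return all(i % m != j % m for i, j in conflicts)

m = next(m for m in range(1, N + 1) if valid_fold(m))
print("array of", N, "cells folds into", m)       # DIST + 1 = 4 cells suffice
```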
14

The use of memory state knowledge to improve computer memory system organization

Isen, Ciji 01 June 2011 (has links)
The trends in virtualization and multi-core, multiprocessor environments have translated into a massive increase in the amount of main memory each system must be fitted with to effectively utilize the growing compute capacity. This increasing demand implies that main memory devices and their issues are as important a part of system design as the central processors. The primary issues of modern memory are power, energy, and capacity scaling. Nearly a third of system power and energy can come from the memory subsystem. At the same time, modern main memory devices are limited by technology in their ability to scale and keep pace with program demands, requiring exploration of alternatives to current main memory storage technology. This dissertation exploits dynamic knowledge of memory state and memory data values to improve memory performance and reduce memory energy consumption. It proposes a cross-boundary approach that communicates dynamic memory management state (allocated and deallocated memory) between software and the hardware memory subsystem through a combination of ISA support and hardware structures. These mechanisms identify memory operations to regions of memory that have no impact on the correct execution of the program because they were either freshly allocated or deallocated: data in deallocated regions is no longer useful to the program, and data in freshly allocated memory is not yet useful because the program has not defined it. Being cognizant of this, such memory operations are avoided, saving energy and improving the usefulness of main memory. Furthermore, when stores write zeros to memory, the number of stores to memory is reduced by capturing them as compressed information stored alongside the memory management state. Using the methods outlined above, this dissertation harnesses memory management state and data value information to achieve significant savings in energy consumption while extending the endurance limit of memory technologies.
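A minimal sketch of the core idea, under our own simplified model: if a cache line belongs to freshly allocated or already deallocated memory, its old contents are meaningless, so a write-allocate fetch for it carries no useful data and can be skipped. The region tracking and event hooks below are assumptions for illustration, not the ISA and hardware mechanism of the dissertation.

```python
# Track memory management state per cache line and count avoidable fetches.
LINE = 64
fresh = set()   # lines allocated but never yet written by the program
dead = set()    # lines whose region has been deallocated

def on_malloc(addr, size):
    for line in range(addr // LINE, (addr + size + LINE - 1) // LINE):
        fresh.add(line)
        dead.discard(line)

def on_free(addr, size):
    for line in range(addr // LINE, (addr + size + LINE - 1) // LINE):
        dead.add(line)
        fresh.discard(line)

skipped = 0
def on_store_miss(addr):
    global skipped
    line = addr // LINE
    if line in fresh or line in dead:
        skipped += 1          # no fetch needed: old contents are meaningless
    fresh.discard(line)       # the line now holds program-defined data

on_malloc(0x1000, 256)
on_store_miss(0x1000)         # fetch skipped: freshly allocated
on_free(0x1000, 256)
print("fetches avoided:", skipped)
```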
15

A Memory Allocation Framework for Optimizing Power Consumption and Controlling Fragmentation

Panwar, Ashish January 2015 (has links) (PDF)
Large physical memory modules are necessary to meet the performance demands of today's applications but can be a major bottleneck in terms of power consumption during idle periods or when systems run workloads that do not stress all the plugged-in memory resources. The contribution of physical memory to overall system power consumption becomes even more significant when CPU cores run in low-power modes during idle periods with hardware support like Dynamic Voltage Frequency Scaling. Our experiments show that even 10% of memory allocations can make references to all the banks of physical memory on a long-running system, primarily due to the randomness in page allocation. We also show that memory hot-remove or memory migration for large blocks is often restricted in a long-running system due to the allocation policies of the current Linux VM, which mixes movable and unmovable pages. Hence it is crucial to improve page migration for large contiguous blocks for a practical realization of the power management support provided by the hardware. Operating systems can play a decisive role in effectively utilizing the power management support of modern DIMMs, such as PASR (Partial Array Self Refresh), in these situations, but have not been using it so far. We propose three different approaches for optimizing memory power consumption by inducing bank-boundary awareness in the standard buddy allocator of the Linux kernel, while also distinguishing user and kernel memory allocations to improve the movability of memory sections (and hence memory hotplug) via page migration techniques. Through a set of minimal changes in the standard buddy system of the Linux VM, we have been able to reduce the number of active memory banks significantly (up to 80%) as well as to improve the performance of the memory-hotplug framework (up to 85%).
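As a sketch of what bank-boundary awareness means (not the kernel changes themselves), the allocator below prefers banks that are already active and touches an idle bank only when the active ones are exhausted, so idle banks could remain in a low-power state such as PASR. The bank geometry and policy details are our assumptions.

```python
# Bank-aware page allocation: concentrate pages in as few banks as possible.
BANKS, PAGES_PER_BANK = 8, 1024
free_pages = {b: set(range(PAGES_PER_BANK)) for b in range(BANKS)}

def alloc_page():
    # Prefer the most-used active bank (fewest free pages, but not empty);
    # an entirely idle bank is chosen only as a last resort.
    candidates = [b for b in range(BANKS) if free_pages[b]]
    bank = min(candidates,
               key=lambda b: (len(free_pages[b]) == PAGES_PER_BANK,  # idle last
                              len(free_pages[b])))
    return bank, free_pages[bank].pop()

allocated = [alloc_page() for _ in range(1500)]
active = {b for b, _ in allocated}
print("banks kept active:", sorted(active))   # 2 of 8 banks, not all 8
```

A random placement of 1500 pages would almost certainly touch all 8 banks; concentrating allocations is what lets the remaining banks stay in low-power states.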
16

Scheduling Local and Remote Memory in Cluster Computers

Serrano Gómez, Mónica 02 September 2013 (has links)
Cluster computers represent a cost-effective alternative to supercomputers. In these systems, it is common to constrain the memory address space of a given processor to the local motherboard. Constraining the system in this way is much cheaper than using a full-fledged shared-memory implementation among motherboards. However, memory usage among motherboards may be unevenly balanced depending on the memory requirements of the applications running on each one. This situation can lead to disk swapping, which severely degrades system performance, even though there may be unused memory on other motherboards. A straightforward solution is to increase the amount of available memory in each motherboard, but the cost of this solution may become prohibitive. On the other hand, remote memory access (RMA) hardware provides fast interconnects among the motherboards of a cluster computer. In recent work, this characteristic has been used to extend the addressable memory space of selected motherboards. In this work, the baseline machine uses this capability as a fast mechanism to allow the local OS to access DRAM installed in a remote motherboard. In this context, efficient memory scheduling becomes a major concern, since main memory latencies have a strong impact on the overall execution time of applications: remote memory accesses may be several orders of magnitude slower than local accesses. Additionally, changing the memory distribution is a slow process that may involve several motherboards, so the memory scheduler needs to make sure that the target distribution provides better performance than the current one. This dissertation addresses these issues by proposing several memory scheduling policies. First, an ideal algorithm and a heuristic strategy to assign main memory from the different memory regions are presented. Additionally, a Quality of Service control mechanism has been devised to prevent unacceptable performance degradation for the running applications. The ideal algorithm finds the optimal memory distribution, but its computational cost is prohibitive for a high number of applications. This drawback is handled by the heuristic strategy, which approximates the best local and remote memory distribution among applications at an acceptable computational cost. The previous algorithms are based on profiling. To deal with this potential shortcoming, we focus on analytical solutions. This dissertation proposes an analytical model that estimates the execution time of a given application for a given memory distribution. This technique is used as a performance predictor that provides the input to a memory scheduler, which uses the estimates to dynamically choose the optimal target memory distribution for each application running in the system so as to achieve the best overall performance. Scheduling at a higher granularity allows simpler scheduling policies. This work studies the feasibility of scheduling at OS page granularity. A conventional hardware-based block interleaving and an OS-based page interleaving are taken as the baseline schemes. From the comparison of the two baseline schemes, we have concluded that only the performance of some applications is significantly affected by page-based interleaving. The reasons behind this impact have been studied and provide the basis for the design of two OS-based memory allocation policies. The first, on-demand (OD), is a simple strategy that places new pages in local memory until this region is full, benefiting from the premise that the most-accessed pages are requested and allocated before the least-accessed ones. In the absence of this premise, however, OD performs worse for some benchmarks. The second policy, Most-accessed in-local (Mail), is proposed to avoid this problem. / Serrano Gómez, M. (2013). Scheduling Local and Remote Memory in Cluster Computers [Doctoral thesis]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31639
17

Functional dynamics of cortical NMDA receptors during systems-level memory consolidation and forgetting

Bessieres, Benjamin 31 March 2016 (has links)
Initially encoded in the hippocampus, new declarative memories are thought to become progressively dependent on a broadly distributed cortical network as they mature and consolidate over time. Although we have a good understanding of the mechanisms underlying the formation of new memories in the hippocampus, little is known about the cellular and molecular mechanisms by which recently acquired information is transformed into remote memories at the cortical level. The N-methyl-D-aspartate receptor (NMDAR) is widely known to be a key player in many aspects of the long-term, experience-dependent synaptic changes underlying associative memory processes. Based on their distinct biophysical properties, we postulated that the activity-dependent surface dynamics of the two predominant GluN2 subunits of NMDARs present in the adult neocortex (GluN2A and GluN2B) could provide metaplastic control of the synaptic plasticity supporting the progressive embedding and stabilization of long-lasting associative memories within cortical networks during memory consolidation. By combining, in adult rats, behavioral, biochemical, and pharmacological approaches with innovative strategies for manipulating the trafficking of NMDAR subunits at the cell membrane, our results identify a cortical switch in the synaptic GluN2-containing NMDAR composition that drives the progressive embedding and stabilization of long-lasting memories within cortical networks. We first established that cortical GluN2B-containing NMDARs, through their specific interactions with the synaptic signaling protein CaMKII, are preferentially recruited upon encoding of associative olfactory memories to enable neuronal allocation, the process by which a new memory trace is thought to be assigned to a given neuronal network. As these memories are progressively processed and embedded into cortical networks, we observed a learning-induced surface redistribution of cortical GluN2B-containing NMDARs outwards from or inwards to synapses, which respectively drives the progressive stabilization or subsequent forgetting of remote memories over time. Finally, increasing the strength of the initial memory upon encoding leads to a faster increase in the cortical GluN2A/GluN2B synaptic ratio and accelerates the kinetics of hippocampal-cortical interactions, which translates into faster stabilization of memories within cortical networks. Taken together, our results provide evidence that GluN2B-NMDAR surface trafficking controls the fate of remote memories (i.e. stabilization versus forgetting), shedding light on a novel mechanism used by the brain to organize recent and remote memories.
18

Klassiska populationsmodeller kontra stokastiska : En simuleringsstudie ur matematiskt och datalogiskt perspektiv

Nilsson, Mattias, Jönsson, Ingela January 2008 (has links)
In this interdisciplinary study, three classic population models are examined from the mathematical side: Malthus's growth model, Verhulst's logistic model, and the Lotka-Volterra predator-prey model. The classic models are compared with stochastic ones: birth-death processes and their diffusion approximations. Comparisons are made by averaging simulations. Carrying out these comparisons requires many simulations, which must be run on a computer, and this is where the computer science side of the work comes in. The models, along with the handling of their results, have been implemented in both MatLab and C in order to compare the execution time of the two languages while performing the study described above. Attempts at time optimization are made, and the user-friendliness of implementing the mathematical problems in each language is also evaluated. The mathematical conclusions are that the averaged solutions do not always coincide with the classic models when simulated over long time intervals. In the logistic model and in the Lotka-Volterra model, the stochastic simulations sooner or later die out as time goes to infinity, whereas their deterministic counterparts live on. In the exponential model, the mean of the stochastic simulations coincides with the deterministic solution, though the spread of the stochastic simulations becomes large over long time intervals. The computer science conclusions are that when only a few models, along with their result processing, are to be implemented and used repeatedly, C is the better-suited language, as it proved significantly faster in execution than MatLab. However, the difficulties that an implementation in C entails must be taken into account. These difficulties can largely be avoided by implementing in MatLab instead, which provides a wealth of well-suited functions and ready-made mathematical solutions.
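The kind of comparison the thesis describes can be sketched for the logistic model: a birth-death process whose mean follows Verhulst's equation, simulated Gillespie-style and averaged over runs, against the closed-form deterministic solution. The parameters, rates, and run count below are our own choices, not the thesis's.

```python
# Averaged logistic birth-death process vs. deterministic Verhulst solution.
import math, random

r, K, N0, T, RUNS = 1.0, 50, 5, 10.0, 200

def logistic_exact(t):
    # Closed-form Verhulst solution: N(t) = K / (1 + (K/N0 - 1) e^{-rt})
    return K / (1 + (K / N0 - 1) * math.exp(-r * t))

def simulate():
    # Birth rate r*n, death rate r*n^2/K: the mean obeys dn/dt = r*n*(1 - n/K).
    n, t = N0, 0.0
    while t < T and n > 0:
        birth = r * n
        death = r * n * n / K
        rate = birth + death
        t += random.expovariate(rate)        # time to next birth/death event
        if t >= T:
            break
        n += 1 if random.random() < birth / rate else -1
    return n

mean = sum(simulate() for _ in range(RUNS)) / RUNS
print("stochastic mean at T:", mean)
print("deterministic value :", round(logistic_exact(T), 2))
```

Over this short horizon the averaged stochastic runs track the deterministic curve; over much longer horizons every stochastic run eventually goes extinct while the deterministic solution persists, which is the divergence the study reports.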
