91. Introducing Real Estate Assets and the Risk of Default in a Stock-flow Consistent Framework (Effah, Samuel Yao, 19 December 2012)
The first two chapters are dedicated to the modeling and implementation of a stock-flow consistent framework that incorporates real estate as an asset in the household portfolio. The third chapter investigates the main determinants of mortgage repayment by Canadian households. The first chapter presents a five-sector stock-flow consistent growth model in which the households' portfolio decision includes how much real estate they wish to hold. The primary aim of the chapter is to model the housing market using the stock-flow consistent approach in order to explain the recent global financial crisis triggered by the housing market. The model is then simulated to predict the behaviour of various variables and to propose solutions for returning the economy to a suitable equilibrium. The household portfolio consists of money deposits, bills, bank equities and real estate. The other sectors that interact with the household sector are the production firms, the banks, the central bank and the government. Aside from the household sector, the banking sector ends up holding real estate equivalent to the amount of mortgages on which households default; the supply of real estate from the production sector is therefore augmented by the additional units held by the banks. The second chapter presents the implementation of the stock-flow consistent model of the first chapter. The purpose is to simulate the model and experiment with shocks to determine the paths of its economic variables, and to identify policies for mitigating the housing crisis. The model is implemented in the EViews modeling software and run until a stationary steady state is achieved; various shocks are then applied to this baseline stationary state.
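As a flavour of how a stock-flow consistent model is iterated to a stationary steady state, here is a minimal sketch in the spirit of Godley and Lavoie's simplest "SIM" model. It has only government money, no real estate, banks or equities, so it stands in for (and is far simpler than) the five-sector model of the chapter, whose equations are not reproduced here; the parameter values are illustrative.

```python
# A minimal stock-flow consistent iteration (Godley-Lavoie "SIM" flavour).
# Households consume out of disposable income and out of wealth; the
# government spends G and taxes at rate theta; the only asset is money.

alpha1, alpha2 = 0.6, 0.4   # propensities to consume out of income and wealth
theta, G = 0.2, 20.0        # tax rate and government spending
H = 0.0                     # household money stock (the only asset here)

for _ in range(500):
    # Solve the within-period flows given last period's money stock H
    Y = (G + alpha2 * H) / (1.0 - alpha1 * (1.0 - theta))  # output
    YD = (1.0 - theta) * Y                                 # disposable income
    C = alpha1 * YD + alpha2 * H                           # consumption
    H = H + YD - C          # stock-flow consistency: saving adds to money

# The stationary state satisfies Y* = G / theta (= 100 here), with H* = 80:
# flows no longer change the stock, which is what "stationary" means above.
```

The same logic scales up: each extra sector adds budget-constraint equations, and the simulation is run until all stocks stop changing before shocks are applied to the baseline.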
The monetary policy results show that a mortgage rate shock is more effective in influencing the growth rate of the economy and in controlling the real estate market. Government fiscal policy is also effective in regulating the housing market; a one-period temporary fiscal shock is even capable of generating permanent long-run growth effects. Household expectations of future housing-price increases, or of future high rates of housing returns, overheat the real estate market without comparable increases in economic growth, so policy makers must keep these expectations in check. The third chapter analyzes the determinants of mortgage repayment options in Canada. Given the freedom that comes with being debt-free and owning a home outright, one would expect households to pay off their mortgages as soon as possible; however, several factors inhibit households from doing so. The study uses Canadian micro-level data to examine the factors that drive households to default, prepay or continue making regular mortgage payments. The methodology is multinomial (polytomous) logistic regression. The empirical results establish that the traditional mortgage-related predictors of repayment are statistically significant with the expected signs, while the results for the provinces are not significantly different from each other. No significance was found, however, for mortgage rates or for the number of children in the household.
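The multinomial (polytomous) logit used in the third chapter can be sketched as follows. Each household's covariates map to probabilities over the three repayment options via a softmax, with one option as baseline. The coefficients and covariates below are made-up illustrations, not estimates from the Canadian micro data.

```python
import numpy as np

# Hypothetical multinomial-logit choice probabilities over repayment
# options. "regular" is the baseline category (utility normalised to 0).
OPTIONS = ["regular", "prepay", "default"]

# One coefficient row per non-baseline option.
# Covariates: [intercept, loan_to_value, rate_differential] (assumptions)
beta = np.array([
    [-1.0, 0.5, -2.0],   # prepay vs. regular
    [-3.0, 3.0,  0.5],   # default vs. regular
])

def repayment_probs(x):
    """Choice probabilities for covariate vector x under the logit model."""
    utilities = np.concatenate([[0.0], beta @ x])  # baseline utility = 0
    expu = np.exp(utilities - utilities.max())     # numerically stable softmax
    return expu / expu.sum()

# A household with high loan-to-value and a small negative rate differential
p = repayment_probs(np.array([1.0, 0.8, -0.5]))
```

Estimation then amounts to choosing `beta` to maximise the likelihood of the observed default/prepay/regular outcomes; the significance tests reported in the chapter are on those estimated coefficients.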
92. Conformational Transitions in Polymer Brushes (Romeis, Dirk, 7 April 2014)
A polymer brush is formed by densely grafting the chain ends of polymers onto a surface. This tethering of the long macromolecules has considerable influence on the surface properties, which can be additionally modified by changing the environmental conditions. In this context it is of special interest to understand and control the behavior of the grafted layer and to create surfaces that display a desired response to external stimulation.
The present work studies densely grafted polymer brushes and the effects that such an environment imposes on an individual chain molecule in the grafted layer. For this purpose we developed a new self-consistent field approach to describe mixtures of heterogeneous chains composed of differently sized hard spheres. Applying this method to polymer brushes, we consider a fraction of the grafted molecules to be different from the majority brush chains. The modification of these chains includes a variation in the degree of polymerization, a different solvent selectivity and a variable size of the free end-monomer. Owing to the computational efficiency of the present approach, compared for example to direct simulation methods, we can study the conformations of the modified 'guest' chains systematically as a function of the relevant parameters. With respect to the brush profile and the distribution of free chain ends, the new method shows very good quantitative agreement with corresponding simulation results. We also confirm the observation that these 'guest' chains can undergo a conformational transition depending on the type of modification and the solvent quality.
For the cases studied in the present work we analyze the conditions to achieve a most sensitive behavior of this conformational switching. In addition, an analytical model is proposed to describe this effect. We compare its predictions to the numerical results and find good agreement.
93. Numerical Reaction-transport Model of Lake Dynamics and Their Eutrophication Processes (Stojanovic, Severin, 22 September 2011)
A 1D numerical reaction-transport model (RTM), a coupled system of partial differential equations, is created to simulate prominent physical and biogeochemical processes and interactions in limnological environments. The prognostic variables are the temperature, horizontal velocity, salinity and turbulent kinetic energy of the water column, and the concentrations of phytoplankton, zooplankton, detritus, phosphate (H3PO4), nitrate (NO3-), ammonium (NH4+), ferrous iron (Fe2+), iron(III) hydroxide (Fe(OH)3(s)) and oxygen (O2) suspended within the water column. Turbulence is modelled using the k-ε closure scheme as implemented by Gaspar et al. (1990) for oceanic environments. The RTM is used to demonstrate how limnological trophic states can be investigated, taking eutrophication as an example, and a phenomenological investigation of the processes leading to and sustaining eutrophication is carried out. A new indexing system that identifies different trophic states, the Self-Consistent Trophic State Index (SCTSI), is proposed. Unlike the common practice of comparing empirical measurements (for example, concentrations of certain species at certain depths) against existing classification tables, this index is calculated solely from dynamic properties of the limnological environment under consideration and examines how those properties affect the sustainability of the ecosystem. Specifically, the index is the ratio of light attenuation by the ecosystem's primary biomass to the total light attenuation by all particulate species and molecular scattering throughout the entire water column.
The index is used to probe various simulated scenarios that are believed to be relevant to eutrophication: nutrient loading, nutrient limitation, overabundance of phytoplankton, solar-induced turbulence, and wind-induced turbulence.
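The attenuation ratio behind the SCTSI can be sketched numerically. The profiles, attenuation coefficients and their names below are illustrative assumptions, not the thesis's calibration; the point is only that the index rises as primary biomass dominates total column attenuation.

```python
import numpy as np

def _trapz(f, z):
    """Trapezoidal integral of profile f over depth grid z."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def sctsi(z, phyto, detritus, k_phyto=0.05, k_det=0.02, k_water=0.04):
    """Hypothetical SCTSI: attenuation by primary biomass divided by total
    attenuation (biomass + other particulates + molecular scattering),
    integrated over the whole water column. Coefficients are assumptions."""
    biomass = _trapz(k_phyto * phyto, z)
    total = _trapz(k_phyto * phyto + k_det * detritus + k_water * np.ones_like(z), z)
    return biomass / total                      # lies in (0, 1)

z = np.linspace(0.0, 20.0, 200)                 # a 20 m water column
oligotrophic = sctsi(z, phyto=0.1 * np.exp(-z / 5), detritus=0.05 * np.ones_like(z))
eutrophic = sctsi(z, phyto=5.0 * np.exp(-z / 5), detritus=0.5 * np.ones_like(z))
# A larger biomass share of attenuation gives a higher trophic-state index
```

Because the index uses only the model's own dynamic fields, it can be evaluated at every time step of the RTM rather than compared against external classification tables.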
94. N-representable density matrix perturbation theory (Dianzinga, Mamy Rivo, 7 December 2016)
Whereas standard approaches for solving the electronic structure present a computational effort scaling with the cube of the number of atoms, solutions to overcome this cubic wall are now well established for ground-state properties and allow reaching the asymptotic linear scaling, O(N). These solutions are based on the nearsightedness of the density matrix and the development of a theoretical framework that bypasses the standard eigenvalue problem to solve directly for the density matrix. Density matrix purification theory constitutes one branch of such a framework. As with earlier developments of O(N) methodology applied to the ground state, the perturbation theory needed to calculate response functions must be revised to circumvent expensive routines such as matrix diagonalization and sum-over-states. The key point is to develop a robust method based only on the search for the perturbed density matrix, for which, ideally, only sparse matrix multiplications are required. In the first part of this work, we derive a canonical purification that respects the N-representability conditions of the one-particle density matrix for both unperturbed and perturbed electronic structure calculations. We show that this purification polynomial is self-consistent and converges systematically to the right solution. In the second part, using a Hartree-Fock approach, we apply the method to the computation of static non-linear response tensors as measured in optical spectroscopy. Beyond the possibility of achieving linear-scaling calculations, we demonstrate that the N-representability conditions are a prerequisite to ensure reliability of the results.
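Density-matrix purification, the family of methods the thesis extends, can be illustrated with the classic McWeeny step P ← 3P² − 2P³, which drives the eigenvalues of a trial density matrix to 0 or 1 using only matrix products, with no diagonalisation. The thesis derives a *canonical* variant that additionally enforces the N-representability conditions; that polynomial is not reproduced here, and the 6×6 "Hamiltonian" below is synthetic.

```python
import numpy as np

# Build a symmetric test Hamiltonian with known eigenvalues
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
eigs = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
H = Q @ np.diag(eigs) @ Q.T

# Initial guess: map H's spectrum linearly into [0, 1], so that low
# (occupied) eigenvalues map near 1. O(N) codes estimate the spectral
# bounds cheaply, e.g. with Gershgorin circles; here they are known.
emin, emax = eigs.min(), eigs.max()
P = (emax * np.eye(6) - H) / (emax - emin)

# McWeeny purification: eigenvalues above 1/2 flow to 1, below 1/2 to 0.
# In linear-scaling codes these are sparse matrix products.
for _ in range(60):
    P2 = P @ P
    P = 3.0 * P2 - 2.0 * P2 @ P

# P is now numerically idempotent: a projector onto the occupied states
```

The N-representability conditions the thesis emphasises are exactly what this initial mapping guarantees (eigenvalues in [0, 1] and the right trace); start outside those bounds and the iteration can diverge, which is why the conditions are a prerequisite for reliable response calculations.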
95. Parallelization of semiconductor band-structure calculations using High Performance Fortran (Malara, Rodrigo Daniel, 14 January 2005)
The use of multiprocessor systems to solve problems that demand great computational power has become more and more common, yet the conversion of sequential programs into concurrent ones is still not trivial. Among the factors that make this task difficult, we highlight the absence of a single, consolidated paradigm for building parallel computer systems and the existence of several programming platforms for developing concurrent programs. It is still impossible to exempt the programmer from specifying how the problem will be partitioned among the processors: for a parallel program to be efficient, the programmer must know in depth the aspects that guide the construction of parallel hardware, the architecture on which the software will run, and the chosen concurrent programming platform. This cannot yet be changed; the gain to be had lies in the implementation of the parallel software itself. That task can be laborious and demand much debugging time, because the programming platforms do not let the programmer abstract away from the hardware. There has been a great effort to create tools that ease this task, allowing the programmer to express the parallelization of a program more easily and succinctly. The present work evaluates the aspects involved in implementing concurrent software using a portability platform called High Performance Fortran, applied to a specific physics problem: the calculation of the band structure of semiconductor heterostructures. The outcome was positive: we obtained a performance gain above what was expected and found that the compiler can be even more efficient than the programmer at parallelizing a program. The initial development cost was not very high and can be amortized over future projects that build on this knowledge, since after the learning phase the parallelization of programs becomes quick and practical. The chosen platform does not allow the parallelization of every kind of problem, only those that follow the data-parallelism paradigm, which nevertheless represent a considerable share of typical physics problems.
96. Implementation and Evaluation of Historical Consistent Neural Networks Using Parallel Computing (Bjarnle, Johan and Holmström, Elias, January 2015)
Forecasting the stock market is well known to be a very complex and difficult task, and is even considered by many to be impossible. The new model, Historical Consistent Neural Networks (HCNN), has recently been applied successfully for prediction and risk estimation on the energy markets. HCNN was developed by Dr. Hans Georg Zimmermann, Siemens AG, Corporate Technology Dept., Munich, and is implemented in the SENN (Simulation Environment for Neural Networks) package distributed by Siemens. The evaluation is based on tests against a large database of historical price data for global indices, currencies, commodities and interest rates, carried out with the Linux version of the SENN package provided by Dr. Zimmermann and his research team. This thesis takes on the task, given by Eturn Fonder AB, of developing a sound basis for evaluating and using HCNN in a fast and easy manner. An important part of our work has been to develop a rapid and improved implementation of HCNN as an interactive software package. Our approach has been to exploit the parallelization capabilities of the graphics card, using the CUDA library together with an intuitive and flexible interface for HCNN built in MATLAB. We show that our CUDA implementation (using an inexpensive graphics device) is about 33 times faster than SENN. With this optimized implementation we have been able to test the model on large data sets consisting of multidimensional financial time series. We present the results with respect to some common statistical measures, evaluate the prediction quality and performance of HCNN, and give our analysis of how to move forward with further testing.
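For background, the HCNN architecture as published by Zimmermann and colleagues evolves one large state vector with a shared transition matrix, and the observed time series are read off as the leading components of that state. The sketch below is a minimal, untrained forward pass; the dimensions, initialisation and readout convention are illustrative assumptions, not the SENN defaults.

```python
import numpy as np

# Minimal HCNN-style forward pass: s_{t+1} = tanh(A @ s_t), with the
# first n_obs state components taken as the modelled observables.
# In practice A and the initial state are trained on historical data;
# here they are random, so the "forecasts" are purely structural.

rng = np.random.default_rng(0)
n_obs, n_hidden = 4, 16          # observed series + hidden augmentation
n_state = n_obs + n_hidden

A = rng.normal(scale=0.2, size=(n_state, n_state))  # would be trained
s = rng.normal(size=n_state)                        # learned in practice

forecasts = []
for t in range(10):              # iterate the closed dynamics forward
    s = np.tanh(A @ s)
    forecasts.append(s[:n_obs])  # observables = leading state components

forecasts = np.array(forecasts)  # shape (horizon, n_obs)
```

The computational load is dominated by the repeated dense matrix-vector (and, during training, matrix-matrix) products, which is exactly the workload that maps well onto CUDA and explains the large speedup over the CPU implementation.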
97. Topics on open economy macroeconomics: a stock-flow consistent approach (Valdecantos Halporn, Sebastian, 13 April 2015)
This thesis presents a series of theoretical studies sharing a common methodology: the use of stock-flow consistent models. Starting from the failure of the mainstream's state-of-the-art analytical tool, the so-called DSGE models, I attempt to show the main drawbacks of these models, which include both methodological problems and the omission of aspects of reality that are crucial (e.g., the role of money and financial markets). In the first chapter I show why stock-flow consistent models offer a more accurate vehicle for understanding modern economies. These reasons, which relate to a greater concern with realism, accounting accuracy and the interaction between economic agents and social institutions, explain why stock-flow consistent models succeeded in detecting the instabilities that were accumulating in the years before the outbreak of the global financial crisis. The following three chapters adapt these models to specific real-world problems that have been, and remain, relevant and hold a privileged place on the policy agenda. The second chapter studies some of the alternatives for reforming the international monetary system that have been proposed since the end of the Second World War, including options debated for decades such as the introduction of the SDR (the currency issued by the IMF) and the bancor (the international currency Keynes proposed, together with the creation of an international clearing union). Starting from a model that describes the current state of affairs, it is shown how that model can be modified to examine how each alternative might work. Simulation exercises on these models show in what way each solution could offer a better global environment for international economic relations. In particular, it is found that a clearing union of the kind Keynes proposed would not only help reduce global imbalances but could also produce a higher level of effective demand on a world scale...
100. Development and validation of a new mass-consistent model using terrain-influenced coordinates (Magnusson, Linus, January 2005)
Simulations of the wind climate in complex terrain are useful in many applications, e.g. wind energy mapping. In this study a new mass-consistent model (MCM), the λ-model, was developed and its capabilities examined. In the model an initial wind field is adjusted to fulfil the requirement of being non-divergent at all points. The advance of the λ-model over previous MCMs is its use of a terrain-influenced coordinate system. Besides the wind field, the model parameters include constants α, one for each direction; these constants have no obvious physical meaning and must be determined empirically. To assess the quality of the λ-model, its results were compared with results from the mesoscale MIUU model. First, comparisons were made for a Gauss-shaped hill, to find situations that the λ-model does not capture, e.g. wakes and thermal effects: the daytime results from the λ-model were good, but the model fails at night. These comparisons also revealed the importance of the α-constants. Second, the models were compared over real terrain. Wind data from the MIUU model at 5 km resolution were interpolated to a 1 km grid and made non-divergent by the λ-model, and the results were compared with MIUU-model simulations at 1 km resolution. The results are quite accurate once a difference in mean wind speed between the MIUU-model runs at 1 km and 5 km resolution is adjusted for. Good results from the λ-model were obtained when a climate-average wind speed was calculated from several simulations with different wind directions, especially when the mean wind speed over the λ-model domain was adjusted to the same level as in the 1 km MIUU run. The λ-model can be a useful tool, as its results were found to be reasonably good in many cases, but the user must be aware of the situations in which the model fails.
A comparison between the models was also made for the Suorva valley in Lapland, which is surrounded by mountainous terrain; there the results were poorer for mean wind speeds but better for wind directions. Future studies could investigate whether the λ-model is usable for resolutions down to about 100 meters.
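The core mass-consistent adjustment can be sketched as a projection: the initial field (u0, v0) is moved to the nearest non-divergent field by solving a Poisson equation for a Lagrange multiplier λ, then subtracting its gradient. The sketch below uses a flat periodic grid and an FFT solve; the λ-model itself works in terrain-influenced coordinates with direction-dependent α weights, which this illustration omits, and the initial field is a made-up analytic example.

```python
import numpy as np

# Minimise ||u - u0||^2 subject to div(u) = 0. The optimality conditions
# give u = u0 - grad(lam) with laplacian(lam) = div(u0, v0).

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.cos(X) * np.sin(Y) + 1.0          # divergent initial wind field
v0 = np.sin(X) * np.cos(Y)

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi  # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")

def ddx(f, K):
    """Spectral derivative of f along the axis with wavenumbers K."""
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

div0 = ddx(u0, KX) + ddx(v0, KY)          # divergence of the first guess

K2 = KX**2 + KY**2
K2[0, 0] = 1.0                            # guard the mean (k = 0) mode
lam_hat = np.fft.fft2(div0) / (-K2)       # solve laplacian(lam) = div0
lam_hat[0, 0] = 0.0
lam = np.real(np.fft.ifft2(lam_hat))

u = u0 - ddx(lam, KX)                     # adjusted, non-divergent field
v = v0 - ddx(lam, KY)
div = ddx(u, KX) + ddx(v, KY)             # ~ 0 at every grid point
```

In the λ-model the same idea is carried out in terrain-influenced coordinates, with the empirical α constants weighting the horizontal and vertical adjustments differently; the FFT solve above is only valid on the periodic flat grid used for illustration.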