361 |
Evaluation and visualization of complexity in parameter setting in automotive industry
Lunev, Alexey, January 2018
Parameter setting is a process primarily used to specify in which kind of vehicle each type of electronic control unit is used. This thesis investigates whether the current strategy for measuring complexity gives users satisfactory results. The strategy consists of structure-based algorithms that are an essential part of the Complexity Analyzer, a prototype application used to evaluate the complexity. The results described in this work suggest that the currently implemented algorithms have to be properly defined and adapted to be used for parameter setting. Moreover, the measurements that the algorithms output have been analyzed in more detail, making the results easier to interpret. It has been shown that a typical parameter-setting file can be regarded as a tree structure. To measure variation in this structure, a new concept called Path entropy has been formulated, tested and implemented. The main disadvantage of the original version of the Complexity Analyzer application is its lack of user-friendliness. Therefore, a web version of the application based on the Model-View-Controller pattern has been developed. Unlike the original version, it includes a user interface, and visualizing the data takes just a couple of seconds, compared to the several minutes it took to run the original application.
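The abstract does not reproduce the Path entropy formula. As a purely illustrative sketch, one plausible formalisation is the Shannon entropy of the distribution of values reached over all root-to-leaf paths of a nested-dictionary tree; the function names and the choice of distribution are assumptions for this sketch, not the thesis's actual definition.

```python
import math
from collections import Counter

def leaf_paths(tree, prefix=()):
    # yield (path, value) pairs for every leaf of a nested-dict tree
    if isinstance(tree, dict):
        for key, sub in tree.items():
            yield from leaf_paths(sub, prefix + (key,))
    else:
        yield prefix, tree

def path_entropy(tree):
    # Shannon entropy (in bits) of the distribution of leaf values
    # reached over all root-to-leaf paths (illustrative formalisation)
    counts = Counter(value for _, value in leaf_paths(tree))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A tree whose leaves all carry the same value has entropy zero (no variation), while four equally frequent distinct leaf values give two bits.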
|
362 |
Monte Carlo Path Simulation and the Multilevel Monte Carlo Method
Janzon, Krister, January 2018
A standard problem in the field of computational finance is the pricing of derivative securities. This is often accomplished by estimating an expected value of a functional of a stochastic process, defined by a stochastic differential equation (SDE). In such a setting the random sampling algorithm Monte Carlo (MC), where paths of the process are sampled, is useful. However, MC in its standard form (SMC) is inherently slow. Additionally, if the analytical solution to the underlying SDE is not available, a numerical approximation of the process is necessary, adding another layer of computational complexity to the SMC algorithm. Thus, the computational cost of achieving a certain level of accuracy with SMC may be relatively high. In this thesis we introduce and review the theory of the SMC method, with and without the need for numerical approximation in path simulation. Two numerical methods for path approximation are introduced: the Euler–Maruyama method and Milstein's method. Moreover, we also introduce and review the theory of a relatively new (2008) MC method – the multilevel Monte Carlo (MLMC) method – which is only applicable when paths are approximated. This method boldly claims that it can – under certain conditions – eradicate the additional complexity stemming from the approximation of paths. With this in mind, we wish to see whether this claim holds when pricing a European call option whose underlying stock process is modelled by geometric Brownian motion. We also compare the performance of MLMC in this scenario to that of SMC, with and without path approximation. Two numerical experiments are performed: the first determines the optimal implementation of MLMC, a static or an adaptive approach; the second illustrates the difference in performance between adaptive MLMC and SMC, depending on the numerical method used and on whether the analytical solution is available.
The results show that SMC is inferior to adaptive MLMC if numerical approximation of paths is needed, and that adaptive MLMC seems to meet the complexity of SMC with an analytical solution. However, while the complexity of adaptive MLMC is impressive, it cannot quite compensate for the additional cost of approximating paths, ending up roughly ten times slower than SMC with an analytical solution.
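As a minimal sketch of the standard setup described above, the following prices a European call under geometric Brownian motion by standard Monte Carlo with Euler–Maruyama path approximation. The parameter values and helper names are invented for this sketch, and no multilevel machinery is shown.

```python
import math
import random

def em_terminal_value(s0, r, sigma, T, n_steps, rng):
    # Euler-Maruyama discretization of dS = r*S dt + sigma*S dW
    dt = T / n_steps
    s = s0
    for _ in range(n_steps):
        s += r * s * dt + sigma * s * rng.gauss(0.0, math.sqrt(dt))
    return s

def smc_call_price(s0, strike, r, sigma, T, n_paths, n_steps, seed=0):
    # standard Monte Carlo: average the discounted payoffs over sampled paths
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s_T = em_terminal_value(s0, r, sigma, T, n_steps, rng)
        total += max(s_T - strike, 0.0)
    return math.exp(-r * T) * total / n_paths
```

With enough paths the estimate settles near the analytical Black–Scholes value; the sampling error shrinks like one over the square root of the number of paths, which is the slowness the MLMC method attacks.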
|
363 |
Boundary Shape Optimization Using the Material Distribution Approach
Kasolis, Fotios, January 2011
No description available.
|
364 |
A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images
Webb, Grayson, January 2018
This thesis proposes a probabilistic level set method for segmenting tumors with heterogeneous intensities. It models the intensities of the tumor and surrounding tissue using Gaussian mixture models. Through a contour-based initialization procedure, samples are gathered for expectation-maximization estimation of the mixture-model parameters. The proposed method is compared against a threshold-based segmentation method using MRI images retrieved from The Cancer Imaging Archive. The cases are manually segmented, and an automated testing procedure is used to find optimal parameters for the proposed method, which is then tested against the threshold-based method. Segmentation times, Dice coefficients and volume errors are compared. The evaluation reveals that the proposed method has a mean segmentation time comparable to the threshold-based method, and performs faster in cases where the volume error does not exceed 40%. The mean Dice coefficient and volume error are also improved, with lower deviation.
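The expectation-maximization step mentioned above can be sketched for a two-component 1D Gaussian mixture. This is a toy: real volume segmentation works on 3D intensities and couples the mixture to a level set, neither of which appears here, and the initial values stand in for the thesis's contour-based initialization.

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gmm_1d(data, mu, sigma, weight, iters=50):
    # EM for a two-component 1D Gaussian mixture.
    # mu, sigma: length-2 lists of initial means / std devs;
    # weight: initial mixing weight of component 0.
    for _ in range(iters):
        # E-step: responsibility of component 0 for each sample
        resp = []
        for x in data:
            p0 = weight * gauss_pdf(x, mu[0], sigma[0])
            p1 = (1.0 - weight) * gauss_pdf(x, mu[1], sigma[1])
            resp.append(p0 / (p0 + p1))
        # M-step: re-estimate weight, means and std devs from responsibilities
        n0 = sum(resp)
        n1 = len(data) - n0
        weight = n0 / len(data)
        mu[0] = sum(r * x for r, x in zip(resp, data)) / n0
        mu[1] = sum((1 - r) * x for r, x in zip(resp, data)) / n1
        sigma[0] = max(math.sqrt(sum(r * (x - mu[0]) ** 2 for r, x in zip(resp, data)) / n0), 1e-6)
        sigma[1] = max(math.sqrt(sum((1 - r) * (x - mu[1]) ** 2 for r, x in zip(resp, data)) / n1), 1e-6)
    return mu, sigma, weight
```

On well-separated synthetic data the estimated means converge close to the generating means.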
|
365 |
Sistema de inferência Fuzzy para estimativa da umidade do solo sob influência do teor de matéria orgânica / Fuzzy inference system for estimating the soil moisture under the influence of organic matter content
Triani, Tércio de Sampaio, 22 June 2015
The study of soil water dynamics has been growing alongside the need to optimize the use of water resources to maintain agricultural productivity. To assist this study, different models of soil water dynamics have been created and studied in an attempt to predict situations that are time-consuming and expensive to investigate empirically. Soil water dynamics is directly associated with physical and hydric parameters, as well as with soil moisture. Determining soil moisture requires techniques that demand a large number of samples, increasing the cost and time needed to perform such measurements. This work continues the dissertation of Belleza. A model based on fuzzy rules is elaborated to estimate the moisture in topsoil from soil-texture data, matric potential and organic-matter content. The model's distinctive focus is the influence of organic matter on soil water retention, which most studies of this type disregard. The data set used for training and validation of the model comes from a research project conducted in the Amazon region, organized in a report funded by Petrobras. The results, obtained by simulation in the software Matlab, show that organic matter has great influence on water retention in soils whose clay content is under 35%. A significant decrease of the total mean error relative to the work of Belleza, which ignores the influence of organic matter, is observed. Increasing the number of rules in the fuzzy inference system also allows a better approximation of the estimated values to the real moisture values. Taking into account the uncertainties inherent to the phenomenon, this model is considered appropriate, due to its simplicity and relatively low average error, and an evolution in the field of modelling soil-moisture estimation by fuzzy logic.
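As a hedged illustration of the kind of fuzzy inference system described above (not the thesis's actual rule base, membership functions or inputs), the sketch below estimates soil moisture from clay and organic-matter content with four zero-order Sugeno rules and weighted-average defuzzification. Every breakpoint and consequent value is invented.

```python
def tri(x, a, b, c):
    # triangular membership function with support (a, c) and peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def soil_moisture_estimate(clay_pct, om_pct):
    # illustrative rule base: two fuzzy sets per input, product t-norm,
    # zero-order Sugeno consequents, weighted-average defuzzification
    clay_low = tri(clay_pct, -35.0, 0.0, 35.0)   # "low clay": below ~35 %
    clay_high = tri(clay_pct, 0.0, 35.0, 70.0)
    om_low = tri(om_pct, -5.0, 0.0, 5.0)
    om_high = tri(om_pct, 0.0, 5.0, 10.0)
    rules = [                                    # (firing strength, moisture % vol)
        (clay_low * om_low, 10.0),
        (clay_low * om_high, 22.0),              # organic matter matters most in low-clay soil
        (clay_high * om_low, 25.0),
        (clay_high * om_high, 30.0),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total > 0 else 0.0
```

With these invented rules, raising organic matter at a fixed low clay content raises the moisture estimate, mirroring the abstract's finding that organic matter matters most below 35% clay.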
|
366 |
American Option Price Approximation for Real-Time Clearing
Blanck, Andreas, January 2018
American-style options are contracts traded on financial markets. These are derivatives of one or more underlying securities that, in contrast to European-style options, allow their holders to exercise at any point before the contracts expire. However, this advantage complicates the mathematical formulation of an option's value considerably, explaining why essentially no exact closed-form pricing formulas exist. Numerous price-approximation methods are available, but their possible areas of application, as well as their performance, measured by speed and accuracy, differ. A clearing house offering real-time solutions is especially dependent on fast pricing methods to calculate portfolio risk, where accuracy is assumed to be an important factor in guaranteeing low-discrepancy estimates. Conversely, overly biased risk estimates may worsen a clearing house's ability to manage great losses, endangering the stability of the financial market it operates in. The purpose of this project was to find methods with optimal performance and to investigate whether price-approximation errors induce biases in option portfolios' risk estimates. Regarding performance, a Quasi-Monte Carlo least-squares method was found suitable for at least one type of exotic option. Yet none of the analyzed closed-form approximation methods could be assessed as optimal because of their varying strengths, although the Binomial Tree model performed most consistently. Moreover, the answer to which method entails the best risk estimates remains inconclusive, since only one set of parameters was used due to heavy calculations. A larger study involving a broader range of parameter values must therefore be performed in order to answer this reliably. However, it was revealed that large errors in risk estimates are avoided only if American standard options are priced with any of the analyzed methods, and not when a faster European formula is employed. Furthermore, the analyzed methods can yield rather different risk estimates, implying that relatively large errors may arise if an inadequate method is applied.
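The Binomial Tree model mentioned above can be sketched as a Cox–Ross–Rubinstein lattice for an American put; the early-exercise comparison at every node is what distinguishes it from the European case. The parameter values in the usage note are illustrative, not taken from the thesis.

```python
import math

def crr_american_put(s0, strike, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial lattice with an early-exercise
    # check at every node (the American feature)
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at maturity; j counts up moves
    values = [max(strike - s0 * u ** j * d ** (n - j), 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):                # backward induction
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            exercise = max(strike - s0 * u ** j * d ** (i - j), 0.0)
            values[j] = max(cont, exercise)       # exercise early if it pays more
    return values[0]
```

For an at-the-money put the American price exceeds the European one, reflecting the value of early exercise.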
|
367 |
Hybrid CPU-GPU Parallel Simulations of 3D Front Propagation
Krishnasamy, Ezhilmathi, January 2014
This master thesis studies GPU-enabled parallel implementations of the 3D Parallel Marching Method (PMM). 3D PMM is aimed at solving the non-linear static Hamilton–Jacobi equations, which have real-world applications such as the study of geological foldings, where each layer of the Earth's crust is considered a front propagating over time. Using parallel computer architectures, fast simulations can be achieved, reducing time consumption, speeding up understanding of the inner Earth and enabling early exploration of oil and gas reserves. Currently 3D PMM is implemented on shared-memory architectures using the OpenMP Application Programming Interface (API) and the MINT programming model, which translates C code into Compute Unified Device Architecture (CUDA) code for a single Graphics Processing Unit (GPU). Parallel architectures, especially GPUs, have seen rapid growth in recent years, allowing faster simulations. In this thesis work, a new parallel implementation of 3D PMM has been developed to exploit multicore CPU architectures as well as single and multiple GPUs. In the multiple-GPU implementation, the 3D data is decomposed into 1D data for each GPU. CUDA streams are used to overlap computation and communication within a single GPU. Part of the decomposed 3D volume data is kept in the respective GPU to avoid complete data transfer between the GPUs over a number of iterations. In total, two kinds of data transfer are involved in the multi-GPU computation: boundary-value data transfer and decomposed 3D volume data transfer. The decomposed 3D volume data transfer between the multiple GPUs is optimized using peer-to-peer memory transfer in CUDA. The speedup is shown and compared between shared-memory CPUs (E5-2660, 16 cores), single GPUs (GTX-590, C2050 and K20m) and multiple GPUs. Hand-coded CUDA showed slightly better performance than the MINT-translated CUDA, and the multiple-GPU implementation showed promising speedup compared to shared-memory multicore CPU and single-GPU implementations.
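The boundary-value transfer between GPUs described above can be mimicked in plain Python with a 1D Jacobi sweep split across two subdomains that exchange one halo cell per iteration. This is a conceptual sketch of the data movement only, not CUDA code and not the PMM update rule; the decomposition and function names are invented.

```python
def jacobi_sweep(u):
    # one Jacobi relaxation sweep; endpoint (boundary) values stay fixed
    v = u[:]
    for i in range(1, len(u) - 1):
        v[i] = 0.5 * (u[i - 1] + u[i + 1])
    return v

def decomposed_sweeps(u, iters):
    # split the domain across two "devices"; each iteration exchanges one
    # halo cell per side before sweeping, mimicking the boundary-value
    # transfer between GPUs, while each device keeps its own subdomain
    mid = len(u) // 2
    left, right = u[:mid], u[mid:]
    for _ in range(iters):
        halo_from_right, halo_from_left = right[0], left[-1]   # the exchange
        left = jacobi_sweep(left + [halo_from_right])[:-1]
        right = jacobi_sweep([halo_from_left] + right)[1:]
    return left + right
```

Because each subdomain sees exactly the neighbour value a single-domain sweep would use, the decomposed result matches the sequential one bit for bit.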
|
368 |
Development of a Digital Mortar Aiming System
Wåglund, Oskar, January 2015
In this thesis, the feasibility of developing a portable, light-weight artillery computer has been investigated. The main goal of the project has been to replace the traditional methods that the Swedish Armed Forces use today to find firing solutions for their mortar, the GRK m/84. A computational core has been written in Java that simulates the trajectory of a shell using the model in NATO's STANAG 4355. The developed system finds firing solutions by using shooting methods and the multi-dimensional Newton–Raphson method. A Graphical User Interface (GUI) tailored to mobile computers has been designed in Android. The computational core along with the GUI has been installed on a rugged handheld computer, and the whole unit has been tested at Markstridsskolan (MSS). The tests showed that the computational core delivers firing solutions that coincide very well with the actual firing solutions needed to hit the desired target.
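The shooting approach described above can be sketched in miniature: integrate a point-mass trajectory with quadratic drag and use one-dimensional Newton–Raphson with a finite-difference derivative to find the elevation that lands at a target range. This is Python rather than Java, all physical constants are invented, and STANAG 4355 is a far richer model than this toy.

```python
import math

def simulated_range(theta, v0=150.0, c_drag=1e-4, g=9.81, dt=0.005):
    # Euler integration of a point-mass shell with quadratic drag;
    # returns the impact range (metres) for launch elevation theta (radians)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        speed = math.hypot(vx, vy)
        ax, ay = -c_drag * speed * vx, -g - c_drag * speed * vy
        x_new, y_new = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
        if y_new < 0.0 and vy < 0.0:
            frac = y / (y - y_new)          # interpolate the ground crossing
            return x + frac * (x_new - x)
        x, y = x_new, y_new

def solve_elevation(target_range, theta0=0.35, tol=0.5, max_iter=20):
    # one-dimensional Newton-Raphson shooting with a finite-difference derivative
    theta = theta0
    for _ in range(max_iter):
        f = simulated_range(theta) - target_range
        if abs(f) < tol:
            break
        h = 1e-4
        df = (simulated_range(theta + h) - simulated_range(theta - h)) / (2 * h)
        theta -= f / df
    return theta
```

Each Newton step re-simulates the trajectory, which is exactly the "shooting" structure: an outer root-finder wrapped around an inner trajectory integrator.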
|
369 |
Photogrammetric methods for calculating the dimensions of cuboids from images / Fotogrammetriska metoder för beräkning av dimensionerna på rätblock från bilder
Lennartsson, Louise, January 2015
There are situations where you would like to know the size of an object but have no ruler nearby. However, you are likely carrying a smartphone with an integrated digital camera, so imagine if you could snap a photo of the object to get a size estimate. Different methods for finding the dimensions of a cuboid from a photograph are evaluated in this project. A simple Android application implementing these methods has also been created. To be able to measure objects in images, we need to know how the scene is reproduced by the camera. This depends on the traits of the camera, called the intrinsic parameters. These parameters are unknown unless a camera calibration is performed, which is a non-trivial task. Because of this, eight smartphone cameras of different models were calibrated in search of similarities that could give grounds for generalisations. To determine the size of the cuboid, the scale needs to be known, which is why a reference object is used. In this project a credit card, placed on top of the cuboid, is used as reference. The four corners of the reference and four corners of the cuboid are used to determine the dimensions of the cuboid. Two methods, one dependent on and one independent of the intrinsic parameters, are used to find the width and length, i.e. the sizes of the two dimensions that share a plane with the reference. These results are then used in another two methods to find the height of the cuboid. Errors were purposely introduced to the corners to investigate the performance of the different methods. The results show that the different methods perform very well and are all equally suitable for this type of problem. They also show that having correct reference corners is more important than having correct object corners, as the results were highly dependent on the accuracy of the reference corners. Another conclusion is that camera calibration is not necessary, because different approximations of the intrinsic parameters can be used instead.
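In the degenerate fronto-parallel case, the reference-scale idea above reduces to a single millimetres-per-pixel factor shared by the card and the cuboid's top face. The thesis's methods handle perspective via the corner correspondences, which this sketch deliberately ignores; the function name and pixel values are invented. The 85.6 mm card width is the ISO/IEC 7810 ID-1 standard.

```python
CARD_WIDTH_MM = 85.6  # ISO/IEC 7810 ID-1 credit-card width

def cuboid_top_dims_mm(length_px, width_px, card_width_px):
    # fronto-parallel, same-plane assumption: one scale factor (mm per pixel)
    # relates pixel spans of the reference card and of the cuboid's top face
    scale = CARD_WIDTH_MM / card_width_px
    return length_px * scale, width_px * scale
```

Note how an error in the card's pixel span perturbs the scale factor and hence both outputs, which is one intuition for why accurate reference corners matter more than accurate object corners.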
|
370 |
Analysis and Implementation of Preconditioners for Prestressed Elasticity Problems: Advances and Enhancements
Dorostkar, Ali, January 2017
In this work, the prestressed elasticity problem is studied as a model of the so-called glacial isostatic adjustment (GIA) process. The model problem is described by a set of partial differential equations (PDE) and discretized with a mixed finite element (FE) formulation. In the presence of prestress, the so-constructed system of equations is non-symmetric and indefinite. Moreover, the resulting system of equations is of saddle-point form. We focus on a robust and efficient block lower-triangular preconditioning method, where the lower diagonal block is an approximation of the so-called Schur complement. The Schur complement is approximated by the so-called element-wise Schur complement, constructed by assembling exact local Schur complements on the cell elements and distributing the resulting local matrices into the global preconditioner matrix. We analyse the properties of the element-wise Schur complement for the symmetric indefinite system matrix and provide proof of its quality. We show that the spectral radius of the element-wise Schur complement is bounded by that of the exact Schur complement and that the quality of the approximation is not affected by the domain shape. The diagonal blocks of the lower-triangular preconditioner are combined with inner iterative schemes accelerated by a (numerically) optimal and robust algebraic multigrid (AMG) preconditioner. We observe that on distributed-memory systems, the top pivot block of the preconditioner does not scale satisfactorily. The implementation of the methods is further studied using a general profiling tool designed for clusters. For non-symmetric matrices we use the theory of Generalized Locally Toeplitz (GLT) matrices and show the spectral behaviour of the element-wise Schur complement compared to the exact Schur complement. Moreover, we use the properties of the GLT matrices to construct a more efficient AMG preconditioner.
Numerical experiments show that the so-constructed methods are robust and optimal.
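For a tiny saddle-point matrix [[A, Bᵀ], [B, 0]], the exact Schur complement S = -B A⁻¹ Bᵀ that the element-wise approximation targets can be computed directly; exact rational arithmetic keeps the example deterministic. This sketch does not perform the element-wise assembly itself, which would build such complements per element and sum them into the global preconditioner.

```python
from fractions import Fraction

def solve(A, b):
    # solve A x = b by Gaussian elimination with partial pivoting, exactly
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def schur_complement(A, B):
    # exact Schur complement S = -B A^{-1} B^T of [[A, B^T], [B, 0]]
    m, n = len(B), len(A)
    # row k of B is column k of B^T, so solve A y = B[k] for each k
    Ainv_Bt = [solve(A, [B[k][j] for j in range(n)]) for k in range(m)]
    return [[-sum(Fraction(B[i][j]) * Ainv_Bt[k][j] for j in range(n))
             for k in range(m)] for i in range(m)]
```

In a practical preconditioner one never forms S explicitly like this; the point of the element-wise construction is precisely to avoid the dense global A⁻¹.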
|