221 |
A New Interpolating Profile Applied to the Finite Volume Method in One- and Two-Dimensional Situations. Santório, Carlos Alexandre, 17 December 2002
In this work, a new discretization scheme for the finite volume method, named FLEX, is proposed for the simulation of problems governed by elliptic and hyperbolic differential equations. Its performance was evaluated on test problems taken from the numerical-methods literature and on tests constructed over the course of the work. The new scheme showed convergence and stability characteristics compatible with, and comparable to, the traditional Central Difference, Power Law, and Flux-Spline schemes. Its accuracy proved to depend on the type of physical problem. Physical problems governed by elliptic partial differential equations involving convection-diffusion, in which the distribution of the flux variable resembles the one assumed by the FLEX scheme, were solved with a smaller error than with the other schemes. For traditional problems of this class that lack this specific feature, the results were intermediate. For hyperbolic problems, even with a crude finite-difference discretization of the transient term, the new scheme showed attractive characteristics for simulating this type of phenomenon: even on coarse meshes, it converged to the reference solution at a higher rate than the other two schemes used in the comparison.
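The abstract does not specify the FLEX profile itself, so the sketch below instead sets up the classic 1-D steady convection-diffusion benchmark with the Central Difference finite-volume profile, one of the comparison schemes named above; all parameter values are illustrative.

```python
# 1-D steady convection-diffusion by finite volumes with the central-difference
# profile (a comparison scheme, not FLEX); Dirichlet values at both ends.
import numpy as np

n, L = 40, 1.0                    # control volumes, domain length
rho_u, gamma = 1.0, 0.1           # convective mass flux and diffusivity
dx = L / n
F, D = rho_u, gamma / dx          # convective and diffusive conductances
phi_l, phi_r = 0.0, 1.0           # Dirichlet boundary values

A = np.zeros((n, n)); b = np.zeros(n)
aW, aE = D + F / 2, D - F / 2     # central-difference neighbour coefficients
for i in range(n):
    if i == 0:                    # west face is the domain boundary (half cell)
        A[i, i], A[i, i + 1] = aE + 2 * D + F, -aE
        b[i] = (2 * D + F) * phi_l
    elif i == n - 1:              # east face is the domain boundary (half cell)
        A[i, i], A[i, i - 1] = aW + 2 * D - F, -aW
        b[i] = (2 * D - F) * phi_r
    else:
        A[i, i - 1], A[i, i], A[i, i + 1] = -aW, aW + aE, -aE
phi = np.linalg.solve(A, b)

xc = np.linspace(dx / 2, L - dx / 2, n)           # cell-centre coordinates
exact = (np.exp(rho_u * xc / gamma) - 1) / (np.exp(rho_u * L / gamma) - 1)
print("max nodal error:", np.abs(phi - exact).max())
```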
|
222 |
Integrated Analysis of Genomic and Longitudinal Clinical Data. January 2014
Clinico-genomic modeling refers to statistical analysis that incorporates both clinical data, such as medical test results and demographic information, and genomic data, such as gene expression profiles. It is an emerging research area in biomedical science and has been shown to extend our understanding of complex diseases. We describe a general statistical modeling strategy for the integrated analysis of clinical and genomic data in which the clinical data are longitudinal observations. Our modeling strategy is aimed at the identification of disease-associated genes and consists of two stages. In the first stage, we propose a hierarchical B-spline model to estimate the disease severity trajectory based on the clinical variables. This disease severity trajectory is a functional summary of the disease progression, from which any characteristics of interest can be extracted. In the second stage, combinations of the extracted characteristics are included in gene-wise linear models to detect the genes responsible for variations in the disease progression. We illustrate our modeling approach in the context of two biomedical studies of complex diseases: tuberculosis (Tb) and colitis-associated carcinoma. The animal subjects were measured longitudinally for clinical information, and biological samples were collected at the subjects' final time points to determine the gene expression profiles. Our results demonstrate that incorporating the longitudinal clinical data increases the value of the information extracted from the expression profiles and contributes to the identification of predictive biomarkers.
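As a loose illustration of the two-stage strategy, the sketch below fits a per-subject cubic B-spline severity trajectory on toy data, extracts the peak progression rate as the functional characteristic, and feeds it into gene-wise linear models. The hierarchical structure of the actual model is not reproduced, and all names and data are hypothetical.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)                        # clinical visit times
knots = np.r_[(t[0],) * 4, [2.5, 5.0, 7.5], (t[-1],) * 4]  # clamped cubic knots
n_subjects, n_genes = 30, 200

# Stage 1: smooth each subject's longitudinal severity measurements with a
# cubic B-spline and extract a characteristic (here, the peak progression rate).
grid = np.linspace(t[0], t[-1], 500)
rates = np.empty(n_subjects)
for i in range(n_subjects):
    speed = rng.uniform(0.5, 2.0)                 # subject-specific progression
    severity = 1 / (1 + np.exp(-speed * (t - 5))) + rng.normal(0, 0.05, t.size)
    traj = make_lsq_spline(t, severity, knots, k=3)
    rates[i] = traj.derivative()(grid).max()

# Stage 2: gene-wise linear models of expression on the extracted characteristic.
expr = np.outer(rates, rng.normal(0, 1, n_genes)) + rng.normal(size=(n_subjects, n_genes))
X = np.column_stack([np.ones(n_subjects), rates])
beta, *_ = np.linalg.lstsq(X, expr, rcond=None)   # slope row flags associated genes
```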
|
223 |
A comparison of statistics for selecting smoothing parameters for loglinear presmoothing and cubic spline postsmoothing under a random groups design. Liu, Chunyan, 01 May 2011
Smoothing techniques are designed to improve the accuracy of equating functions. The main purpose of this dissertation was to propose a new statistic (CS) and compare it to existing model selection strategies in selecting smoothing parameters for polynomial loglinear presmoothing (C) and cubic spline postsmoothing (S) for mixed-format tests under a random groups design. For polynomial loglinear presmoothing, CS was compared to seven existing model selection strategies in selecting the C parameters: likelihood ratio chi-square test (G2), Pearson chi-square test (PC), likelihood ratio chi-square difference test (G2diff), Pearson chi-square difference test (PCdiff), Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Consistent Akaike Information Criterion (CAIC). For cubic spline postsmoothing, CS was compared to the ± 1 standard error of equating (± 1 SEE) rule.
In this dissertation, both pseudo-test data (Biology long and short forms, and Environmental Science long and short forms) and simulated data were used to evaluate the performance of the CS statistic and the existing model selection strategies. For both types of data, sample sizes of 500, 1000, 2000, and 3000 were investigated. In addition, No Equating Needed and Equating Needed conditions were investigated for the simulated data. For polynomial loglinear presmoothing, the mean absolute difference (MAD), average squared bias (ASB), average squared error (ASE), and mean squared error (MSE) were computed to evaluate the performance of all model selection strategies against three sets of criteria: the cumulative relative frequency distribution (CRFD), the relative frequency distribution (RFD), and the equipercentile equating relationship. For cubic spline postsmoothing, the evaluation of the model selection procedures was based only on the MAD, ASB, ASE, and MSE of equipercentile equating.
The main findings based on the pseudo-test data and simulated data were as follows: (1) As sample sizes increased, the average C values increased and the average S values decreased for all model selection strategies. (2) For polynomial loglinear presmoothing, compared to the results without smoothing, all model selection strategies introduced bias in the RFD but significantly reduced its standard errors and mean squared errors; only AIC reduced the MSE of the CRFD and the MSE of equipercentile equating across all sample sizes and test forms; the best CS procedure tended to yield an equivalent or smaller MSE of equipercentile equating than the AIC and G2diff statistics. (3) For cubic spline postsmoothing, both the ±1 SEE rule and the CS procedure tended to perform reasonably well in reducing the ASE and MSE of equipercentile equating. (4) The ±1 SEE rule in postsmoothing tended to perform better than any of the seven existing model selection strategies in presmoothing in terms of reducing random error and total error. (5) The pseudo-test data and the simulated data tended to yield similar results. The limitations of the study and possible future research are discussed in the dissertation.
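As a sketch of the presmoothing-selection setting (not of the proposed CS statistic, which the abstract does not define), the code below fits Poisson loglinear models of increasing degree C to toy score frequencies and compares AIC and BIC; the BIC sample-size convention shown is one of several in use.

```python
# Polynomial loglinear presmoothing: log E[freq] = b0 + b1*x + ... + bC*x^C,
# fit as a Poisson GLM; compare information criteria across degrees C.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
scores = np.arange(0, 41)                           # raw-score scale 0..40
freq = rng.poisson(200 * np.exp(-0.5 * ((scores - 22) / 7) ** 2))

x = (scores - scores.mean()) / scores.std()         # standardize for stability
for C in range(2, 7):
    X = np.column_stack([x ** p for p in range(C + 1)])
    res = sm.GLM(freq, X, family=sm.families.Poisson()).fit()
    aic = -2 * res.llf + 2 * (C + 1)
    bic = -2 * res.llf + (C + 1) * np.log(freq.sum())  # N = examinees (one convention)
    print(f"C={C}  AIC={aic:9.2f}  BIC={bic:9.2f}")
```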
|
224 |
A Comparative Study of American Option Valuation and Computation. Rodolfo, Karl, January 2007
Doctor of Philosophy (PhD) / For many practitioners and market participants, the valuation of financial derivatives is of high importance, with uses ranging from risk management to speculative investment strategies and capital enhancement. A developing market requires efficient yet accurate methods for valuing financial derivatives such as American options. A closed-form analytical solution for American options has been very difficult to obtain because of the boundary conditions imposed on the valuation problem. Following the method of solving the American option as a free boundary problem, in the spirit of the "no-arbitrage" pricing framework of Black-Scholes, the option price and hedging parameters can be represented as an integral equation consisting of the European option value and an early exercise value that depends on the optimal free boundary. Such methods exist in the literature and, along with risk-neutral pricing methods, have been implemented in practice. Yet existing methods are either accurate but inefficient, or trade accuracy for computational speed. A new numerical approach to the valuation of American options by cubic splines is proposed and shown to be accurate and efficient compared with existing option pricing methods. The behaviour of the American option's early exercise boundary is also compared with that of other pricing models.
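The thesis's cubic-spline valuation method is not reproduced here; as a hedged illustration of combining a standard American-put pricer with cubic splines, the sketch below prices with a CRR binomial tree and then fits a cubic spline across spot prices to obtain smooth prices and deltas. All parameter values are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def american_put_crr(S0, K, r, sigma, T, steps=500):
    """American put via the Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
    p = (np.exp(r * dt) - d) / (u - d)             # risk-neutral up probability
    disc = np.exp(-r * dt)
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(K - S, 0.0)                     # terminal payoff
    for _ in range(steps):
        S = S[:-1] * d                             # spot grid one step earlier
        V = np.maximum(disc * (p * V[:-1] + (1 - p) * V[1:]),  # continuation
                       K - S)                                  # early exercise
    return V[0]

spots = np.linspace(60, 140, 17)
prices = [american_put_crr(s, K=100, r=0.05, sigma=0.2, T=1.0) for s in spots]
spline = CubicSpline(spots, prices)
print(spline(100.0), spline(100.0, 1))             # price and delta at S0 = 100
```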
|
225 |
A Test for Curve Similarity. 程毓婷 (Cheng, Yu Ting), date unknown
This thesis proposes a test of whether two groups of curves share a similar shape after alignment. In functional data analysis, it is common to have curves that follow the same pattern but differ by a transformation of time; the common pattern can be characterized by a shape function. The problem considered here is to test whether the shape functions of the two groups of curves are essentially the same. A test statistic is proposed, and its p-value and power are obtained via simulation. Simulation results indicate that the test performs well.
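A minimal sketch of the testing idea, with toy curves, a crude grid-shift alignment, and a permutation p-value in place of the thesis's statistic and simulation design:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)

def make_curves(n, shape, shift_sd=0.03, noise_sd=0.05):
    # Curves share a shape function but are randomly shifted in time.
    return np.array([shape((t - rng.normal(0, shift_sd)) % 1.0)
                     + rng.normal(0, noise_sd, t.size) for _ in range(n)])

def align(curves, ref):
    # Shift each curve on the grid to best match the reference curve.
    return np.array([np.roll(c, min(range(-10, 11),
                     key=lambda k: np.sum((np.roll(c, k) - ref) ** 2)))
                     for c in curves])

def stat(g1, g2):
    ref = np.concatenate([g1, g2]).mean(axis=0)
    m1, m2 = align(g1, ref).mean(axis=0), align(g2, ref).mean(axis=0)
    return np.mean((m1 - m2) ** 2)       # discrepancy between aligned shapes

g1 = make_curves(20, lambda x: np.sin(2 * np.pi * x))
g2 = make_curves(20, lambda x: np.sin(2 * np.pi * x))   # same shape: H0 true
obs = stat(g1, g2)
both = np.vstack([g1, g2])
perm = [stat(*np.split(rng.permutation(both), 2)) for _ in range(200)]
print("p-value:", np.mean([s >= obs for s in perm]))
```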
|
226 |
Performance Evaluation of Perceptually Lossless Medical Image Coder. Chai, Shan (shan.chai@optusnet.com.au), January 2007
Medical imaging technologies offer the benefit of faster and more accurate diagnosis. When combined with digitization, they offer the further advantages of permanent storage and fast transmission to any geographical location. However, there is a need for efficient compression algorithms that alleviate the taxing burden of both large storage space and transmission bandwidth requirements. The Perceptually Lossless Medical Image Coder (PLMIC) is a new image compression technique. It provides a solution to the challenge of delivering clinically critical information in the shortest time possible, embedding visual pruning into the JPEG 2000 coding framework to achieve optimal compression without losing the visual integrity of medical images. However, the performance of the PLMIC under certain medical image operations is still unknown. In this thesis, we investigate the performance of the PLMIC by applying linear, quadratic, and cubic standard and centered B-spline interpolation filters. To evaluate visual performance, a subjective assessment involving 30 medical images and 6 image-processing experts was conducted. The perceptually lossless medical image coder was compared to the state-of-the-art JPEG-LS-compliant LOCO and NLOCO image coders. Overall, the results showed no perceivable differences of statistical significance when the medical images were enlarged by a factor of 2. The findings of the thesis may help researchers further improve the coder; they may also inform radiologists of the PLMIC coder's performance, helping them reach correct diagnoses.
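The enlargement operation studied can be reproduced in outline with SciPy's spline-based zoom; the sketch below, on a stand-in array rather than a decoded medical image, applies B-spline interpolation of orders 1 through 3 (linear, quadratic, cubic).

```python
import numpy as np
from scipy import ndimage

image = np.random.default_rng(3).random((64, 64))   # stand-in for a decoded slice

for order, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    enlarged = ndimage.zoom(image, 2, order=order)  # B-spline of this order
    print(name, enlarged.shape)                     # each result is 128 x 128
```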
|
227 |
Surface Modeling with Spline Functions. Tazeroualti, Mahammed, 26 February 1993
This work consists of three distinct parts. In the first part, we introduce a Gauss-Seidel-type algorithm for minimizing symmetric positive semi-definite functionals, and we prove its convergence. As an application, we give two surface smoothing methods. These methods are based on the idea of reducing a two-dimensional smoothing problem to the solution of a sequence of easily solved one-dimensional problems, using the spline inf-convolution operation. In the second part, we introduce a new method for designing a progressive lens. The lens is represented by a sufficiently regular surface on which conditions are imposed on its principal curvatures in certain zones (far-vision and near-vision zones) and on its principal curvature directions in other zones (nasal and temporal zones). The surface is written as a tensor product of B-splines of degree four. To compute it, a non-quadratic operator must be minimized; this minimization is carried out by an iterative procedure whose fast convergence has been verified numerically.
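As a small sketch of the surface representation described (a tensor product of degree-four B-splines, with curvature information taken from partial derivatives), assuming toy data and SciPy's RectBivariateSpline in place of the author's construction:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(-1, 1, 20)
y = np.linspace(-1, 1, 20)
Z = 0.05 * (x[:, None] ** 2 + y[None, :] ** 2)     # toy height map

surf = RectBivariateSpline(x, y, Z, kx=4, ky=4)    # degree-4 tensor product
# Principal curvatures follow from first and second partial derivatives:
zx  = surf(0.2, 0.3, dx=1)[0, 0]
zxx = surf(0.2, 0.3, dx=2)[0, 0]
zyy = surf(0.2, 0.3, dy=2)[0, 0]
print(zx, zxx, zyy)
```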
|
228 |
Spline Functions with Shape Conditions. Medina, Julio, 9 October 1985
We study the problem of interpolating and smoothing data under shape conditions (positivity, monotonicity, convexity) in the real plane. Five classes of methods are used for its solution: (1) polynomial splines, (2) rational splines, (3) splines under tension, (4) mathematical programming, (5) splines with a barrier function ("repoussoir").
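The five method classes are not detailed in the abstract; as a minimal illustration of why shape conditions matter, the sketch below contrasts an unconstrained cubic spline, which overshoots monotone data, with a standard monotone interpolant (PCHIP).

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 0.1, 2.0, 2.0])           # monotone data

grid = np.linspace(0, 4, 200)
cs = CubicSpline(x, y)(grid)                      # may dip below 0 / overshoot 2
pc = PchipInterpolator(x, y)(grid)                # preserves monotonicity

print("cubic spline min:", cs.min(), " pchip min:", pc.min())
```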
|
229 |
Application of Spline Inf-Convolution to the Processing of Gas Oil Chromatograms. Valera Garcia, Daniel, 30 October 1984
We present two software packages for processing gas oil chromatograms that evaluate the content of particular components: the n-paraffins. The first estimates these contents by least-squares registration of two chromatograms: that of the gas oil, and that of the same gas oil with the n-paraffins removed. The second requires only the chromatogram of the gas oil itself: the missing information is replaced by theoretical knowledge of the shape of the n-paraffin peaks. It proceeds in two steps: (1) application of the theory of spline inf-convolution to separate, as well as possible, a normalized n-paraffin profile from the rest of the gas oil; (2) application of multivariable minimization methods to choose the optimal shape among the possible shapes of an n-paraffin peak.
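As a loose sketch of the second approach's idea, separating a normalized n-paraffin profile from the rest of the signal, the code below uses plain linear least squares rather than spline inf-convolution; the data and peak shape are synthetic.

```python
import numpy as np

t = np.linspace(0, 10, 500)
profile = np.exp(-0.5 * ((t - 5) / 0.15) ** 2)    # normalized peak shape
background = 0.3 + 0.02 * t                       # smooth "rest of the gas oil"
signal = 2.7 * profile + background               # observed chromatogram

# Design matrix: the peak profile plus a low-degree polynomial baseline.
X = np.column_stack([profile, np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
print("estimated peak content:", coef[0])         # recovers ~2.7
```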
|
230 |
Homogeneous Spline Functions of Several Variables. Duchon, Jean, 8 February 1980
We present mathematical tools for the study of spline functions of several variables.
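Duchon's homogeneous multivariate splines include the thin-plate spline; the sketch below is a minimal scattered-data interpolation example with SciPy's implementation, on toy data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
pts = rng.random((50, 2))                          # scattered sites in the plane
vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", degree=1)
print(tps(np.array([[0.5, 0.5]])))                 # interpolated value at (0.5, 0.5)
```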
|