151

Adaptive Spline-based Finite Element Method with Application to Phase-field Models of Biomembranes

Jiang, Wen January 2015 (has links)
Interfaces play a dominant role in governing the response of many biological systems, and they pose many challenges to traditional finite element methods. Sharp-interface models require the finite element mesh to align with the surfaces of discontinuity, while diffuse-interface models replace the sharp interface with continuous variations of an order parameter at significant computational cost. To overcome these difficulties, we focus on developing a computationally efficient spline-based finite element method for interface problems.

A key challenge when employing B-spline basis functions in finite element methods is the robust imposition of Dirichlet boundary conditions. We begin by examining the weak enforcement of such conditions for B-spline basis functions based on Nitsche's approach, with application to both second- and fourth-order problems. The use of spline-based finite elements is then examined together with a Nitsche technique for enforcing constraints on an embedded interface. We show how the choice of weights and stabilization parameters in the Nitsche consistency terms strongly influences the accuracy and robustness of the method. In the presence of a curved interface, we employ hierarchical local refinement to improve the geometric representation of the interface and recover optimal rates of convergence.

In multiple dimensions, a spline basis is obtained as a tensor product of one-dimensional bases. This requires a rectangular grid that cannot be refined locally in regions of embedded interfaces. To address this issue, we develop an adaptive spline-based finite element method that employs hierarchical refinement and coarsening techniques. The refinement and coarsening process guarantees linear independence and preserves the regularity of the basis functions. We further propose an efficient data-transfer algorithm for both refinement and coarsening that yields accurate results.

The adaptive approach is applied to vesicle modeling, which allows three-dimensional simulations to proceed efficiently. We employ a continuum approach to model the evolution of microdomains on the surface of Giant Unilamellar Vesicles. The chemical energy is described by a Cahn-Hilliard-type density functional that characterizes the line energy between domains of different species, and the generalized Canham-Helfrich-Evans model describes the mechanical energy of the vesicle membrane. This coupled model is cast in a diffuse-interface form using the phase-field framework. The effect of the coupling is illustrated through several numerical examples of domain formation coupled to vesicle shape changes. / Dissertation
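As background for the Nitsche approach mentioned above, a minimal sketch of the standard symmetric Nitsche formulation for weakly imposing the Dirichlet condition u = g on ∂Ω in the second-order model problem −Δu = f; the mesh size h and stabilization parameter γ are generic, and the particular weights used in the thesis may differ:

```latex
a_h(u,v) = \int_{\Omega} \nabla u \cdot \nabla v \,\mathrm{d}x
         - \int_{\partial\Omega} (\partial_n u)\, v \,\mathrm{d}s
         - \int_{\partial\Omega} (\partial_n v)\, u \,\mathrm{d}s
         + \frac{\gamma}{h} \int_{\partial\Omega} u\, v \,\mathrm{d}s,
\qquad
\ell_h(v) = \int_{\Omega} f\, v \,\mathrm{d}x
          - \int_{\partial\Omega} (\partial_n v)\, g \,\mathrm{d}s
          + \frac{\gamma}{h} \int_{\partial\Omega} g\, v \,\mathrm{d}s .
```

The method is consistent because a smooth exact solution satisfies a_h(u, v) = ℓ_h(v) for every test function v, and it is stable once γ is chosen large enough relative to an inverse-inequality constant of the B-spline space, which is where the choice of weights and stabilization parameters becomes critical.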
152

Numerical Studies Of The Electronic Properties Of Low Dimensional Semiconductor Heterostructures

Dikmen, Bora 01 September 2004 (has links) (PDF)
An efficient numerical method for solving Schrödinger's and Poisson's equations using a basis set of cubic B-splines is investigated. The method is applied to find both the wave functions and the corresponding eigenenergies of low-dimensional semiconductor structures. The computational efficiency of the method is demonstrated through multiresolution analysis, non-uniform grid construction, and the imposition of boundary conditions, by applying it to well-known single-electron potentials. The method compares well with analytical solutions and with results of the finite difference method.
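As an illustration of the kind of discretisation described above, a minimal one-dimensional sketch: a cubic B-spline Galerkin solver for the time-independent Schrödinger equation with a harmonic potential (units ħ = m = 1). The knot layout, quadrature rule, and potential are illustrative choices, not taken from the thesis.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.linalg import eigh

k = 3                                   # cubic B-splines
a, b = -10.0, 10.0                      # computational domain
t = np.concatenate(([a] * k, np.linspace(a, b, 60), [b] * k))   # clamped knot vector
nbasis = len(t) - k - 1

def basis(i, x, der=0):
    """Evaluate the i-th B-spline basis function (or its derivative) at x."""
    c = np.zeros(nbasis)
    c[i] = 1.0
    return BSpline(t, c, k)(x, nu=der)

idx = range(1, nbasis - 1)              # drop boundary splines so psi(a) = psi(b) = 0

# global Gauss-Legendre quadrature (adequate for a sketch)
xq, wq = np.polynomial.legendre.leggauss(400)
xq = 0.5 * (b - a) * xq + 0.5 * (b + a)
wq = 0.5 * (b - a) * wq
V = 0.5 * xq**2                         # harmonic-oscillator potential

B = np.array([basis(i, xq) for i in idx])
dB = np.array([basis(i, xq, der=1) for i in idx])

H = 0.5 * (dB * wq) @ dB.T + (B * (wq * V)) @ B.T   # kinetic + potential terms
S = (B * wq) @ B.T                                  # overlap (mass) matrix

E, _ = eigh(H, S)                       # generalized eigenvalue problem
print(E[:4])                            # approximately 0.5, 1.5, 2.5, 3.5
```

A non-uniform knot vector (finer near the potential well) can be substituted directly for the uniform one above, which is the kind of flexibility the abstract refers to.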
153

Bundle block adjustment using 3D natural cubic splines

Lee, Won Hee. January 2008 (has links)
Thesis (Ph. D.)--Ohio State University, 2008. / Title from first page of PDF file. Includes bibliographical references (p. 113-119).
154

Nonparametric regression methods

Βαρελάς, Γεώργιος 08 July 2011 (has links)
One thing that sets statisticians apart from other scientists is the general public's relative ignorance of what the field of statistics actually is. People have a rough idea of what chemistry or biology is, but what exactly do statisticians do? One answer is the following: statistics is the science concerned with the collection, summarization, presentation, and interpretation of data. Data are the key, of course; they are what we learn from and base decisions on. A table of data presents a collection of valid observations, but on its own it is clearly inadequate for summarizing or interpreting them. The problem is that no assumptions have been made about the process that generated the data (more simply, the analysis is purely nonparametric, in the sense that no formal structure is imposed on the data). Consequently, no real summary or synthesis is possible. The classical approach to this difficulty is to assume a parametric model for the underlying process, specifying a particular form for the underlying density. Various statistics can then be computed and presented through a fitted density. Unfortunately, the strength of parametric modelling is also its weakness. By tying inference to a specific model we can gain a great deal, but only if the model holds, at least approximately. If the assumed model is not correct, the conclusions we draw from it can be worse than useless, leading to misleading interpretations of the data.
155

Cardinal spline wavelet decomposition based on quasi-interpolation and local projection

Ahiati, Veroncia Sitsofe 03 1900 (has links)
Thesis (MSc (Mathematics))--University of Stellenbosch, 2009. / Wavelet decomposition techniques have grown over the last two decades into a powerful tool in signal analysis. Similarly, spline functions have enjoyed sustained popularity in the approximation of data. In this thesis, we study the cardinal B-spline wavelet construction procedure based on quasi-interpolation and local linear projection, before specialising to the cubic B-spline on a bounded interval. First, we present some fundamental results on cardinal B-splines, which are piecewise polynomials with uniformly spaced breakpoints at the dyadic points Z/2^r, for r ∈ Z. We start our wavelet decomposition method with a quasi-interpolation operator Q_{m,r} mapping, for every integer r, real-valued functions on R into S^r_m, where S^r_m is the space of cardinal splines of order m, such that the polynomial reproduction property Q_{m,r} p = p holds for every polynomial p of degree at most m − 1 and every r ∈ Z. We then give the explicit construction of Q_{m,r}. We next introduce, in Chapter 3, a local linear projection operator sequence {P_{m,r} : r ∈ Z}, with P_{m,r} : S^{r+1}_m → S^r_m, r ∈ Z, defined in terms of a Laurent polynomial solution of minimal length of a certain Bezout identity based on the refinement mask symbol A_m, which we give explicitly. With such a linear projection operator sequence, we define, in Chapter 4, the error space sequence W^r_m = {f − P_{m,r} f : f ∈ S^{r+1}_m}. We then show, by solving a certain Bezout identity, that there exists a finitely supported function ψ_m ∈ S^1_m such that, for every r ∈ Z, the integer-shift sequence {ψ_m(2^r · − j)} spans the linear space W^r_m. According to our definition, we then call ψ_m the mth-order cardinal B-spline wavelet. The wavelet decomposition algorithm based on the quasi-interpolation operator Q_{m,r}, the local linear projection operator P_{m,r}, and the wavelet ψ_m is then based on finite sequences, and is shown to possess, for a given signal f, the essential property of yielding relatively small wavelet coefficients in regions where the support interval of ψ_m(2^r · − j) overlaps with a C^m-smooth region of f. Finally, in Chapter 5, we explicitly construct minimally supported cubic B-spline wavelets on a bounded interval [0, n]. We also develop a corresponding explicit decomposition algorithm for a signal f on a bounded interval. Throughout Chapters 2 to 5, numerical examples are provided to graphically illustrate the theoretical results.
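A compact way to state the resulting decomposition (using the notation of the abstract; this is the generic two-scale spline-wavelet splitting, not a formula quoted from the thesis):

```latex
S^{r+1}_m \;=\; S^{r}_m \oplus W^{r}_m,
\qquad
f \;=\; P_{m,r} f \;+\; \sum_{j} d^{\,r}_{j}\, \psi_m\!\bigl(2^{r}\cdot - j\bigr)
\quad \text{for } f \in S^{r+1}_m .
```

Applying this relation level by level peels off one layer of wavelet (detail) coefficients d^r_j per scale, and because Q_{m,r}, P_{m,r}, and ψ_m are all described by finite sequences, each step of the decomposition algorithm involves only finite sums.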
156

Multivariate refinable functions with emphasis on box splines

Van der Bijl, Rinske 03 1900 (has links)
Thesis (MComm (Mathematics))--Stellenbosch University, 2008. / The general purpose of this thesis is the analysis of multivariate refinement equations, with focus on the bivariate case. Since box splines are the main prototype of such equations (just like the cardinal B-splines in the univariate case), we make them our primary subject of discussion throughout. The first two chapters are about the origin and definition of box splines, and elaborate on them in sufficient detail to build on them in all subsequent chapters, while providing many examples and graphical illustrations to make precise every aspect of box splines that will be mentioned. Multivariate refinement equations are ones that take the form

φ(x) = Σ_{i ∈ Z^n} p_i φ(Mx − i),   (1)

where φ is a real-valued function on R^n, called a refinable function, p = {p_i}_{i ∈ Z^n} is a sequence of real numbers, called a refinement mask, and M is an n × n matrix with integer entries, called a dilation matrix. It is important to note that any such equation is simultaneously determined by all three of φ, p and M, and the thesis explains what role each of these plays in a refinement equation. In Chapter 3 we discuss the definition of refinement equations in more detail and elaborate on box splines as our first examples of refinable functions, also showing that one can use them to create even more such functions. Observing from Chapter 2 that box splines demand yet another parameter, namely an initial direction matrix D, we focus on more general instances of these in Chapter 4, while keeping the dilation matrix M fixed. Chapter 5 in turn deals with the matrix M and generalizes some of the results of Chapter 3 accordingly, keeping the initial direction matrix fixed. Having dealt with the refinement equation itself, we then focus, in Chapter 6, on the support of a (bivariate) refinable function, that is, the part of the xy-grid on which such a function “lives”, and on that of a refinement mask, and obtain a few results that are in a sense introductory to the work of the next chapter. Next, we discuss one area in which refinable functions are especially applicable, namely subdivision, which is analyzed in Chapter 7. After giving the basic definitions of subdivision and subdivision convergence, and investigating the “sum rules” in Section 7.1, we prove our main subdivision convergence result in Section 7.2. The chapter is concluded with some examples in Section 7.3. The thesis concludes, in Chapter 8, with a number of remarks on what has been done and on issues left for future research.
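A minimal univariate sketch of equation (1) and of the associated subdivision operator, using the cardinal cubic B-spline as the prototype refinable function; the mask and delta starting sequence are standard, and the bivariate box-spline case treated in the thesis works analogously with a matrix dilation M.

```python
import numpy as np

# The cardinal cubic B-spline N_4 satisfies N_4(x) = sum_i p_i N_4(2x - i)
# with refinement mask p = (1/8) [1, 4, 6, 4, 1] and dilation M = 2.
mask = np.array([1, 4, 6, 4, 1]) / 8.0

def subdivide(c, steps):
    """Apply the subdivision operator (S_p c)_j = sum_i p_{j-2i} c_i, `steps` times."""
    for _ in range(steps):
        up = np.zeros(2 * len(c))
        up[::2] = c                     # upsample by the dilation factor 2
        c = np.convolve(up, mask)       # convolve with the refinement mask
    return c

c = np.zeros(9)
c[4] = 1.0                              # delta sequence
vals = subdivide(c, 5)                  # coefficients of N_4 in the level-5 dilated basis
print(vals.max())                       # close to N_4's peak value 2/3
```

The iterates S_p^k δ are the coefficients of the refinable function in the dilated basis {φ(2^k · − j)}, so their behaviour as k grows is exactly the subdivision convergence question studied in Chapter 7.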
157

Estimating HIV incidence from multiple sources of data

Brizzi, Francesco January 2018 (has links)
This thesis develops novel statistical methodology for estimating the incidence and the prevalence of Human Immunodeficiency Virus (HIV) using routinely collected surveillance data. The robust estimation of HIV incidence and prevalence is crucial to correctly evaluate the effectiveness of targeted public health interventions and to accurately predict the HIV-related burden imposed on healthcare services. Bayesian CD4-based multi-state back-calculation methods are a key tool for monitoring the HIV epidemic, providing estimates of HIV incidence and diagnosis rates by disentangling their competing contributions to the observed surveillance data. Improving the effectiveness of public health interventions requires targeting specific age groups at high risk of infection; however, existing methods do not allow such subgroups to be identified. The methodological focus of this thesis therefore lies in developing a rigorous statistical framework for age-dependent back-calculation, in order to jointly estimate age- and time-dependent HIV incidence and diagnosis rates. Key challenges addressed include ensuring the computational feasibility of the proposed methods, an issue that has previously hindered extensions of back-calculation, and achieving the joint modelling of time- and age-specific incidence. The suitability of nonparametric bivariate smoothing methods for modelling age- and time-specific incidence is investigated in detail within comprehensive simulation studies. Furthermore, to enhance the generalisability of the proposed model, we develop back-calculation methods that can accommodate less detailed surveillance data: data collected from an intermediate point of the epidemic onwards, or available only on a coarse scale, in both age-dependent and age-independent back-calculation. The applicability of the proposed methods is illustrated using routinely collected surveillance data from England and Wales for the HIV epidemic among men who have sex with men (MSM).
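For orientation, the classical back-calculation relation that multi-state methods of this kind generalise (this is the generic convolution form, not the thesis's CD4-staged, age-dependent model): the expected number of new HIV diagnoses at time t is

```latex
\mu(t) \;=\; \int_{0}^{t} h(s)\, f(t - s \mid s)\, \mathrm{d}s ,
```

where h(s) is the incidence of infection at calendar time s and f(d | s) is the density of the delay d between infection and diagnosis for someone infected at time s. The age-dependent extension additionally indexes h and f by age, allowing age- and time-specific incidence and diagnosis rates to be estimated jointly, as described above.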
158

[en] OBJECT-BASED MODELING OF TURBIDITE LOBES USING SINGLE-VALUED B-SPLINES / [pt] MODELAGEM BASEADA EM OBJETOS DE LOBOS TURBIDÍTICOS USANDO B-SPLINES UNIVALORADOS

YULIETH ALZATE CARDONA 13 March 2017 (has links)
[en] Turbidity currents are gravity-driven flows with a density higher than that of their surroundings; they are turbulent in appearance, move at high speed, and carry out a process of sediment transfer. The problem addressed in this thesis is the modelling of turbidite deposits, based on a depositional model containing three turbidite lobes. The main contribution is an object-based model that uses single-valued B-splines to simulate turbidite reservoirs on a regular Cartesian grid.
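A minimal sketch of the single-valued representation mentioned above: a lobe-like thickness surface expressed as a bicubic B-spline z = s(x, y) over a regular Cartesian grid. The Gaussian-ellipse thickness profile and the grid dimensions are purely illustrative, not taken from the thesis.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0.0, 10.0, 41)          # regular Cartesian grid
y = np.linspace(0.0, 6.0, 25)
X, Y = np.meshgrid(x, y, indexing="ij")

# illustrative lobe-shaped thickness field (elliptical Gaussian)
thickness = 2.0 * np.exp(-(((X - 5.0) / 3.0) ** 2 + ((Y - 3.0) / 1.5) ** 2))

# single-valued bicubic B-spline surface z = s(x, y)
lobe = RectBivariateSpline(x, y, thickness, kx=3, ky=3)

print(lobe(5.0, 3.0))                   # peak thickness, about 2.0
```

Because the surface is single-valued over the grid, several such lobes can be stacked and evaluated cell by cell, which maps naturally onto a regular Cartesian reservoir grid.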
159

Appraisal Engineering Based on GAMLSS Models

de Araújo Florencio, Lutemberg 31 January 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The technical determination of the value of a property (houses, land parcels, among others) is extremely important for decision-making in many segments of society and in many governmental and private bodies. It falls to appraisal engineering, as the science of value, to collect, process, and analyse data and to estimate models that satisfactorily explain the variability observed in prices in the market under study. However, non-normality, heteroskedasticity, and spatial and structural heterogeneity are quite common in real-estate data, so traditional models such as the classical normal linear regression model (CNLRM) and generalized linear models (GLM) can be limited. Given this, and based on a sample of 2109 observations of urban land parcels in the city of Aracaju-SE for the years 2005, 2006, and 2007, we estimate the hedonic price function using the class of regression models proposed by Rigby & Stasinopoulos (2005), the generalized additive models for location, scale, and shape (GAMLSS), which allows a wide family of distributions to be fitted to the response variable and permits the regression structure of the variable of interest to be modelled directly with parametric and/or nonparametric functions. This dissertation accordingly describes and characterizes GAMLSS models and compares the fits obtained via CNLRM, GLM, and GAMLSS on the same data set. In the empirical analysis, the response variable is the unit price of the land parcel and the explanatory variables are the structural, locational, and economic characteristics of the property. Owing to the flexibility of the GAMLSS regression structure, some covariates (for example, the geographic coordinates of the parcel's location) are modelled nonparametrically using spline smoothers, and both the location (μ) and scale (σ) parameters of the response variable are modelled. The results show that the GAMLSS models provide a better fit than those obtained via CNLRM and GLM, according to graphical and numerical residual analyses and to the Akaike and Schwarz criteria, indicating that the GAMLSS class appears to be more appropriate for estimating the parameters of the hedonic price function.
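For reference, the generic GAMLSS specification of Rigby & Stasinopoulos (2005), of which the model above (location μ and scale σ modelled, with spline smoothers for selected covariates such as the geographic coordinates) is an instance; the notation is the standard one from that paper rather than the dissertation's own:

```latex
y_i \sim \mathcal{D}(\mu_i, \sigma_i, \nu_i, \tau_i),
\qquad
g_k(\theta_k) \;=\; X_k \beta_k + \sum_{j=1}^{J_k} s_{jk}(x_{jk}),
\quad k = 1, \dots, 4,
```

where θ_1 = μ, θ_2 = σ, θ_3 = ν, θ_4 = τ are the distribution parameters, g_k are link functions, X_k β_k collects the parametric (linear) terms, and the s_{jk} are smooth, possibly spline-based, functions of covariates.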
160

Spline-based sieve semiparametric generalized estimating equation for panel count data

Hua, Lei 01 May 2010 (has links)
In this thesis, we propose to analyze panel count data using a spline-based sieve generalized estimating equation method with the semiparametric proportional mean model E(N(t) | Z) = Λ_0(t) exp(β_0^T Z). The natural logarithm of the baseline mean function, log Λ_0(t), is approximated by a monotone cubic B-spline function. The estimates of the regression parameters and spline coefficients are the roots of the spline-based sieve generalized estimating equations (sieve GEE). The proposed method avoids assuming any parametric structure for the baseline mean function and the underlying counting process. Selection of an appropriate covariance matrix that represents the true correlation between the cumulative counts improves estimating efficiency. In addition to the parameters in the proportional mean function, the estimation that accounts for over-dispersion and autocorrelation involves an extra nuisance parameter σ², which can be estimated using a method of moments proposed by Zeger (1988). The parameters in the mean function are then estimated by solving the pseudo generalized estimating equation with σ² replaced by its estimate σ²_n. We show that the estimate of (β_0, Λ_0) based on this two-stage approach is still consistent and converges at the optimal rate for the nonparametric/semiparametric regression setting. The asymptotic normality of the estimate of β_0 is also established. We further propose a spline-based projection variance estimation method and show its consistency. Simulation studies are conducted to investigate the finite-sample performance of the sieve semiparametric GEE estimates, as well as of different variance estimation methods, for different sample sizes. The covariance matrix that accounts for overdispersion generally increases estimating efficiency when overdispersion is present in the data. Finally, the proposed method with different covariance matrices is applied to real data from a bladder tumor clinical trial.
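A minimal sketch of one common way to parameterise a monotone cubic B-spline for log Λ_0(t): with nondecreasing spline coefficients the spline itself is nondecreasing, so exponentiating it yields a valid monotone baseline mean. The knot layout and the cumulative-sum reparameterisation are illustrative; the thesis's exact sieve construction may differ.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)

k = 3                                                        # cubic B-splines
knots = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, 8), [1.0] * k))
nbasis = len(knots) - k - 1

alpha = rng.standard_normal(nbasis)                          # unconstrained parameters
coef = np.cumsum(np.exp(alpha))                              # strictly increasing coefficients
log_baseline = BSpline(knots, coef, k)                       # nondecreasing cubic spline

t = np.linspace(0.0, 1.0, 201)
Lambda0 = np.exp(log_baseline(t))                            # monotone baseline mean Lambda_0(t)
assert np.all(np.diff(Lambda0) >= 0.0)
```

In the sieve GEE itself, the unconstrained parameters (here `alpha`) and the regression parameter β would be estimated jointly by solving the estimating equations, with the number of knots allowed to grow slowly with the sample size.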
