231

Novel Methods for Multidimensional Image Segmentation

Pichon, Eric, 03 November 2005
Artificial vision is the problem of creating systems capable of processing visual information. A fundamental sub-problem of artificial vision is image segmentation, the problem of detecting a structure in a digital image. Examples of segmentation problems include the detection of a road in an aerial photograph or the determination of the boundaries of the brain's ventricles in medical imagery. The extraction of structures allows for subsequent higher-level cognitive tasks. One of them is shape comparison. For example, if the brain ventricles of a patient are segmented, can their shapes be used for diagnosis? That is to say, do the shapes of the extracted ventricles more closely resemble those of healthy patients or those of patients suffering from schizophrenia? This thesis deals with the problems of image segmentation and shape comparison in the mathematical framework of partial differential equations. The contribution of this thesis is threefold: 1. A technique for the segmentation of regions is proposed. A cost functional is defined for regions based on a non-parametric functional of the distribution of image intensities inside the region. This cost is constructed to favor regions that are homogeneous. Regions that are optimal with respect to this cost can be determined with limited user interaction. 2. The use of direction information is introduced for the segmentation of open curves and closed surfaces. A cost functional is defined for structures (curves or surfaces) by integrating a local, direction-dependent pattern detector along the structure. Optimal structures, corresponding to the best match with the pattern detector, can be determined using efficient algorithms. 3. A technique for shape comparison based on the Laplace equation is proposed. Given two surfaces, one-to-one correspondences are determined that allow for the characterization of local and global similarity measures. The local differences among shapes (resulting, for example, from a segmentation step) can be visualized for qualitative evaluation by a human expert and can also be used for classifying shapes into, for example, normal and pathological classes.
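The first contribution admits a compact illustration. The toy sketch below is not the author's functional or optimization scheme (the thesis drives the optimization with partial differential equations); it only shows the underlying idea of a non-parametric homogeneity cost, scoring a candidate region by the Shannon entropy of its intensity histogram so that homogeneous regions receive a low cost:

```python
import numpy as np

def region_homogeneity_cost(image, mask, bins=64):
    """Score a region by the Shannon entropy of its intensity histogram:
    tightly clustered intensities give low entropy, hence low cost."""
    hist, _ = np.histogram(image[mask], bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                 # empirical intensity distribution
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

# A homogeneous patch scores lower than its noisy surroundings.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (64, 64))                    # noisy background
img[16:48, 16:48] = 0.5 + 0.02 * rng.standard_normal((32, 32))
inside = np.zeros((64, 64), dtype=bool)
inside[16:48, 16:48] = True
print(region_homogeneity_cost(img, inside))   # low cost: homogeneous region
print(region_homogeneity_cost(img, ~inside))  # high cost: heterogeneous region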
232

Voxel-based Cortical Thickness Measurement of Human Brain Using Magnetic Resonance Imaging

Chen, Wen-Fu, 14 February 2012
The cerebral cortex, classified as gray matter, is the superficial layer of the cerebrum. In recent years, many studies have shown that abnormalities of cortical thickness may be correlated with diseases or disorders of the central nervous system, such as Alzheimer's disease and lissencephaly. The purpose of this work is therefore to implement a measurement of cortical thickness. In general, two approaches, surface-based and voxel-based methods, have been proposed to measure cortical thickness. In this thesis, a procedure for the voxel-based method using Laplace's equation was developed on the basis of a 2008 publication by Chloe Hutton et al. to obtain a voxel-based cortical thickness (VBCT) map. The results of our home-made program were further compared with those calculated by Hutton's program, which was generously provided by the author. The differences between the two implementations consist of four main parts. First of all, different strategies of tissue classification were used to define the boundary conditions of Laplace's equation. When gray matter, white matter, and cerebrospinal fluid were classified by maximizing the tissue probability, Hutton's program tended to find more cerebrospinal fluid voxels in sulci by skeletonizing the non-parenchymal area. Second, the algorithms for layer growing also differ. The single layer obtained by the 26-neighborhood algorithm in our program is noticeably thicker than that produced by Hutton's program using the 6-neighborhood. Third, compared with the fixed step size (usually 0.5 mm) proposed in the main reference to track cortical streamlines, we designed a variable step size, reducing the underestimation of cortical thickness. Last but not least, the connecting points of a cortical streamline are usually not grid points, thus requiring interpolation to estimate the stepping gradient. We adopted linear interpolation for better accuracy, whereas Hutton et al. searched for the closest grid point as a replacement to achieve faster computation.
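For orientation, here is a minimal 2-D sketch of the Laplace-based pipeline the abstract describes: relax Laplace's equation between the two tissue boundaries, then integrate a streamline along the gradient and report its length as the thickness. The geometry, grid, and fixed step size are invented for illustration; the thesis works on 3-D MRI volumes with the refinements (variable step, tissue classification) listed above.

```python
import numpy as np

# Toy domain: an annular "cortex" between an inner disk (white matter,
# u = 0) and the exterior (CSF, u = 1); the true thickness is 3 mm.
h = 0.5                                        # grid spacing in mm
x = np.arange(-20.0, 20.0 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.hypot(X, Y)
cortex = (R >= 10.0) & (R <= 13.0)
u = np.where(R > 13.0, 1.0, 0.0)               # boundary conditions

for _ in range(4000):                          # Jacobi relaxation of Laplace's equation
    nbr = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
    u[cortex] = nbr[cortex]                    # update only cortical pixels

gx, gy = np.gradient(u, h)

def interp(F, px, py):
    """Bilinear interpolation of grid F at the physical point (px, py)."""
    i, j = (px - x[0]) / h, (py - x[0]) / h
    i0, j0 = int(i), int(j)
    di, dj = i - i0, j - j0
    return ((1 - di) * (1 - dj) * F[i0, j0] + di * (1 - dj) * F[i0 + 1, j0]
            + (1 - di) * dj * F[i0, j0 + 1] + di * dj * F[i0 + 1, j0 + 1])

px, py, length, step = 10.0, 0.0, 0.0, 0.1     # seed on the inner boundary
while interp(u, px, py) < 0.999 and length < 10.0:
    nx, ny = interp(gx, px, py), interp(gy, px, py)
    g = np.hypot(nx, ny)
    px, py = px + step * nx / g, py + step * ny / g
    length += step
print(f"estimated thickness ~ {length:.2f} mm (ground truth: 3 mm)")
```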
233

Semi-Analytical Solutions of One-Dimensional Multispecies Reactive Transport in a Permeable Reactive Barrier-Aquifer System

Mieles, John Michael, May 2011
At many sites it has become apparent that most chemicals of concern (COCs) in groundwater are persistent and not effectively treated by conventional remediation methods. In recent years, the permeable reactive barrier (PRB) technology has proven to be more cost-efficient in the long run and capable of rapidly reducing COC concentrations by up to several orders of magnitude. In its simplest form, the PRB is a vertically emplaced rectangular porous medium in which impacted groundwater passively enters a narrow treatment zone. In the treatment zone dissolved COCs are rapidly degraded as they come in contact with the reactive material. As a result, the effluent groundwater contains significantly lower solute concentrations as it re-enters the aquifer and flows towards the plane of compliance (POC). Effective implementation of the PRB relies on accurate site characterization to identify the existing COCs, their interactions, and their required residence time in the PRB and aquifer. Ensuring adequate residence time in the PRB-aquifer system allows COCs to react longer, hence improving the probability that regulatory concentrations are achieved at the POC. In this study, the Park and Zhan solution technique is used to derive steady-state analytical and transient semi-analytical solutions to multispecies reactive transport in a permeable reactive barrier-aquifer (dual domain) system. The advantage of the dual domain model is that it can account for the potential existence of natural degradation in the aquifer when designing the required PRB thickness. Also, like the single-species Park and Zhan solution, the solutions presented here were derived using the total mass flux (third-type) boundary condition in the PRB-aquifer system. The study focuses primarily on the steady-state analytical solutions of the tetrachloroethylene (PCE) serial degradation pathway and secondarily on the analytical solutions of the parallel degradation pathway. Lastly, the solutions in this study are not restricted solely to the PRB-aquifer model. They can also be applied to other types of dual domain systems with distinct flow and transport properties, and up to four other species reacting in serial or parallel degradation pathways. Although the solutions are long, the results of this study are novel in that the solutions provide improved modeling flexibility. For example: 1) every species can have unique first-order reaction rates and unique retardation factors, 2) higher order daughter species can be modeled solely as byproducts by neglecting their input concentrations, 3) entire segments of the parallel degradation pathway can be neglected depending on the desired degradation pathway model, and 4) converging multi-parent reactions can be modeled. As part of the study, separate Excel spreadsheet programs were created to facilitate prompt application of the steady-state analytical solutions, for both the serial and parallel degradation pathways. The spreadsheet programs are included as supplementary material.
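As a point of orientation for the (much longer) coupled multispecies solutions, the sketch below implements only the simplest special case: one species at steady state in a single semi-infinite domain, with the third-type (flux) inlet boundary condition the abstract emphasizes. Parameter values are invented for illustration.

```python
import numpy as np

def steady_state_conc(x, C0, v, D, k):
    """Steady state of D*C'' - v*C' - k*C = 0 on x >= 0 with a third-type
    (flux) inlet boundary v*C0 = v*C(0) - D*C'(0): single-species special
    case, not the coupled PRB-aquifer solutions derived in the thesis."""
    m = (v - np.sqrt(v**2 + 4.0 * k * D)) / (2.0 * D)  # decaying root of D m^2 - v m - k = 0
    A = v * C0 / (v - D * m)                           # from the flux boundary condition
    return A * np.exp(m * x)

# Illustrative parameters: v = 0.1 m/d, D = 0.05 m^2/d, k = 0.05 1/d.
x = np.linspace(0.0, 20.0, 5)
print(steady_state_conc(x, C0=1.0, v=0.1, D=0.05, k=0.05))
```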
234

Testing Lack-of-Fit of Generalized Linear Models via Laplace Approximation

Glab, Daniel Laurence, May 2011
In this study we develop a new method for testing the null hypothesis that the predictor function in a canonical link regression model has a prescribed linear form. The class of models, which we will refer to as canonical link regression models, constitutes arguably the most important subclass of generalized linear models and includes several of the most popular generalized linear models. In addition to the primary contribution of this study, we will revisit several other tests in the existing literature. The common feature among the proposed test, as well as the existing tests, is that they are all based on orthogonal series estimators and used to detect departures from a null model. Our proposal for a new lack-of-fit test is inspired by the recent contribution of Hart and is based on a Laplace approximation to the posterior probability of the null hypothesis. Despite having a Bayesian construction, the resulting statistic is implemented in a frequentist fashion. The formulation of the statistic is based on characterizing departures from the predictor function in terms of Fourier coefficients, and subsequently testing that all of these coefficients are 0. The resulting test statistic can be characterized as a weighted sum of exponentiated squared Fourier coefficient estimators, where the weights depend on user-specified prior probabilities. The prior probabilities give the investigator the flexibility to examine specific departures from the prescribed model. Alternatively, the use of noninformative priors produces a new omnibus lack-of-fit statistic. We present a thorough numerical study of the proposed test and the various existing orthogonal series-based tests in the context of the logistic regression model. Simulation studies demonstrate that the test statistics under consideration possess desirable power properties against alternatives that have been identified in the existing literature as being important.
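The abstract fully specifies only the shape of the statistic. A schematic rendering of that shape (the exact scaling, and the way the Fourier coefficients are estimated from the fitted model, may differ in the thesis) is:

```python
import numpy as np

def lof_statistic(phi_hat, prior_probs, scale=1.0):
    """Schematic lack-of-fit statistic: a weighted sum of exponentiated
    squared Fourier-coefficient estimates, weighted by user-specified
    prior probabilities on the frequencies. The precise normalization
    used in the thesis may differ from this sketch."""
    phi_hat = np.asarray(phi_hat, dtype=float)
    w = np.asarray(prior_probs, dtype=float)
    w = w / w.sum()                              # normalize the priors
    return float(np.sum(w * np.exp(scale * phi_hat**2 / 2.0)))

phi_hat = [0.9, 0.1, 0.05, 0.02]                 # hypothetical coefficient estimates
print(lof_statistic(phi_hat, [1, 1, 1, 1]))      # noninformative priors: omnibus test
print(lof_statistic(phi_hat, [4, 2, 1, 1]))      # emphasize low-frequency departures
```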
235

Solution Of One Dimensional Transient Flow In Composite Aquifers Using Stehfest Algorithm

Bakar, Urun, 01 September 2010
In this study, piezometric heads are determined in a composite aquifer composed of an alluvial deposit of finite width adjacent to a semi-infinite fractured rock. One-dimensional transient flow induced by a constant-discharge pumping rate from a stream intersecting the alluvial part of the aquifer is considered. The parts of the aquifer are homogeneous and isotropic. The equations of flow and the initial and boundary conditions are converted to dimensionless forms for graphical presentation and for interpretation of the results independently of the discharge and head inputs specific to the problem. The equations are first solved in the Laplace domain, and the Laplace-domain solutions are inverted numerically to the real-time domain using the Stehfest algorithm. For this purpose, a set of subroutines in VBA for Excel was developed. The procedure is verified by applying the code to flow in a semi-infinite homogeneous aquifer under constant discharge, for which an analytical solution is available in the literature. VBA codes were also developed for two special cases of a finite aquifer, with an impervious boundary and with a recharge boundary on the right-hand side. Results of the composite aquifer solutions with extreme transmissivity values are compared with these two cases to verify the methodology and the sensitivity of the results.
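The Stehfest algorithm itself is compact. The thesis implements it as VBA subroutines in Excel; a Python rendering of the standard formula, verified here against the known transform pair F(s) = 1/(s+1), f(t) = exp(-t) (mirroring the thesis' verification against an aquifer case with a known analytical solution), is:

```python
import numpy as np
from math import factorial

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def stehfest_invert(F, t, N=12):
    """Invert the Laplace-domain function F(s) at time t > 0."""
    V = stehfest_coefficients(N)
    a = np.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Verification against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))
```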
236

Convergence Transition of BAM on Laplace BVP with Singularities

Lin, Guan-yu, 30 June 2009
The boundary approximation method (BAM), also known in engineering as the collocation Trefftz method, is used to solve the Laplace boundary value problem on a rectangular domain. Suppose the particular solutions are chosen for the whole domain. If there is no singularity at the other vertices, the method should exhibit exponential convergence; otherwise, it degenerates to polynomial convergence. In the latter case, the order of convergence is related to the intensity of the singularity, so it is easy to design models with desired convergence orders. On a sectorial domain, when the boundary condition on one side is a transcendental function, it must be approximated by a power series. The truncation of this power series generates an artificial singularity when solving the Laplace equation on a polygon, which greatly slows the expected order of convergence. This thesis studies how the truncation error affects the convergence speed. Moreover, we focus on the transition behavior of the convergence from one order to another. In the end, we also apply our results to the boundary approximation method with an enriched basis.
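A minimal collocation Trefftz computation illustrates the convergence behavior at stake. The sketch below is a generic setup, not one of the thesis' models: it fits harmonic polynomials to smooth Dirichlet data on the unit square by boundary least squares and exhibits the exponential error decay that a vertex singularity (or truncated boundary data) would degrade to polynomial order.

```python
import numpy as np

def trefftz_matrix(n_terms, n_pts):
    """Boundary collocation points on the unit square and the harmonic
    polynomial basis 1, Re z^k, Im z^k evaluated at them."""
    t = np.linspace(0.0, 1.0, n_pts)
    bx = np.concatenate([t, np.ones_like(t), t[::-1], np.zeros_like(t)])
    by = np.concatenate([np.zeros_like(t), t, np.ones_like(t), t[::-1]])
    z = bx + 1j * by
    cols = [np.ones_like(bx)]
    for k in range(1, n_terms + 1):
        cols += [(z ** k).real, (z ** k).imag]
    return np.column_stack(cols), bx, by

g = lambda x, y: np.exp(x) * np.cos(y)        # smooth harmonic boundary data
for n in (2, 4, 8, 16):
    A, bx, by = trefftz_matrix(n, 60)         # fit by least squares
    c, *_ = np.linalg.lstsq(A, g(bx, by), rcond=None)
    B, qx, qy = trefftz_matrix(n, 400)        # check on a finer point set
    print(n, np.max(np.abs(B @ c - g(qx, qy))))   # error decays exponentially in n
```

(The basis is left unscaled for brevity; production Trefftz codes normalize it, since the conditioning of the least-squares system degrades as n grows.)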
237

Asymptotic study of stochastic algorithms and pricing of Parisian options

Lelong, Jérôme, 14 September 2007
The first part of this thesis is devoted to the study of the randomly truncated stochastic algorithms of Chen and Zhu. The first study of this algorithm concerns its almost-sure convergence. In the second chapter, we continue the study of this algorithm by examining its rate of convergence. We also consider a moving-average version of the algorithm. Finally, we close with some applications to finance. The second part of this thesis addresses the pricing of Parisian options, building on the work of Chesney, Jeanblanc and Yor. The pricing method is based on obtaining closed-form formulas for the Laplace transforms of the prices with respect to maturity. We establish these formulas for single- and double-barrier Parisian options. We then study a numerical inversion method for these transforms and establish its accuracy.
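The abstract does not name the inversion scheme used for the price transforms. One standard choice for transforms of this kind is the Abate-Whitt "Euler" method, sketched below and checked against the pair F(s) = 1/(s+1), f(t) = exp(-t); the parameters A, m, and p are conventional defaults, not values from the thesis.

```python
import numpy as np
from math import comb

def euler_invert(F, t, m=20, p=12, A=18.4):
    """Abate-Whitt 'Euler' Laplace inversion (a sketch): trapezoidal
    discretization of the Bromwich integral (discretization error ~ exp(-A)),
    with Euler (binomial) averaging of the alternating partial sums."""
    def S(n):  # n-th partial sum of the alternating series
        s = 0.5 * F(A / (2 * t)).real
        for k in range(1, n + 1):
            s += (-1) ** k * F((A + 2j * np.pi * k) / (2 * t)).real
        return s
    avg = sum(comb(p, j) * 2.0 ** -p * S(m + j) for j in range(p + 1))
    return np.exp(A / 2.0) / t * avg

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, euler_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))
```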
238

Asymptotic methods for computing electromagnetic fields in thin-layer media. Application to biological cells.

Poignard, Clair, 23 November 2006
In this thesis, we present mathematically justified asymptotic methods for determining the electromagnetic fields in heterogeneous thin-layer media. The motivation for this work is the computation of the electric field in biological cells, which are composed of a conducting cytoplasm surrounded by a thin, highly insulating membrane. We replace the membrane, as its thickness becomes infinitely small, with appropriate transmission or boundary conditions, and we estimate the error committed by these approximations. For low frequencies, we consider the quasistatic equation for the potential from which the field derives. Using a computation in circular geometry, we obtain explicit expressions for the potential and deduce the asymptotics of the electric field as a function of the thickness of the thin layer, with error estimates. We then estimate the difference between the true field and the static field, and we generalize our asymptotic expansion to arbitrary geometry. The second part of this thesis deals with mid frequencies: we give the asymptotic expansion of the solution of the Helmholtz equation as the membrane thickness tends to 0. All these results are illustrated by finite element computations. Finally, for high frequencies, we construct a pseudodifferential impedance condition that concentrates the effect of the layer on its inner boundary. We conclude this thesis with a problem of high-frequency diffraction of an incident wave by a small disk. Using pseudodifferential analysis, we bound the norm of the trace of the diffracted field at a fixed distance from the inhomogeneity, as a function of the size of the object and of the incident wave.
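The circular-geometry computation mentioned above reduces, in the quasistatic case, to a small linear system. The sketch below uses a 2-D disk geometry with illustrative nondimensional parameters (not the thesis' notation): separation of variables gives a potential of the form g(r) cos(theta) in each region, and continuity of the potential and of the normal current at the two interfaces determines the coefficients. It recovers the classical insulating-membrane limit, a transmembrane potential of about 2 E0 R cos(theta) for a disk.

```python
import numpy as np

# Disk "cell" in a uniform field E0 along x: cytoplasm (r < R1, conductivity
# sc), thin membrane (R1 < r < R2, sm), exterior (r > R2, se). The potential
# is V = g(r) cos(theta) with g(r) = A r | B r + C / r | -E0 r + D / r.
E0, R2 = 1.0, 1.0
R1 = R2 - 5e-4                          # membrane ~ 5 nm on a 10 micron cell
sc, sm, se = 0.5, 1e-7, 1.5             # relative conductivities (illustrative)
M = np.array([
    [R1, -R1, -1 / R1, 0.0],            # V continuous at R1
    [sc, -sm, sm / R1**2, 0.0],         # normal current continuous at R1
    [0.0, R2, 1 / R2, -1 / R2],         # V continuous at R2
    [0.0, sm, -sm / R2**2, se / R2**2]])  # normal current continuous at R2
rhs = np.array([0.0, 0.0, -E0 * R2, -se * E0])
A, B, C, D = np.linalg.solve(M, rhs)
dV = A * R1 - (B * R2 + C / R2)         # transmembrane potential at theta = 0
print(abs(dV))                          # ~ 2 E0 R2: the insulating-disk limit
```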
239

Clément-type interpolation on spherical domains - interpolation error estimates and application to a posteriori error estimation

Apel, Thomas; Pester, Cornelia, 31 August 2006
In this paper, a mixed boundary value problem for the Laplace-Beltrami operator is considered for spherical domains in $\mathbb{R}^3$, i.e. for domains on the unit sphere. These domains are parametrized by spherical coordinates $(\varphi, \theta)$, such that functions on the unit sphere are considered as functions in these coordinates. Careful investigation leads to the introduction of a proper finite element space corresponding to an isotropic triangulation of the underlying domain on the unit sphere. Error estimates are proven for a Clément-type interpolation operator, where appropriately weighted norms are used. The estimates are applied to the deduction of a reliable and efficient residual error estimator for the Laplace-Beltrami operator.
240

Study of Laplace and related probability distributions and their applications

Aryal, Gokarna Raj, 01 June 2006
The aim of the present study is to investigate a probability distribution that can be derived from the Laplace probability distribution and can be used to model various real-world problems. In the last few decades, there has been a growing interest in the construction of flexible parametric classes of probability distributions. Various forms of skewed and kurtotic distributions have appeared in the literature for data analysis and modeling. In particular, various forms of the skew Laplace distribution have been introduced and applied in several areas including medical science, environmental science, communications, economics, engineering and finance, among others. In the present study we investigate the skew Laplace distribution based on the definition of skewed distributions introduced by O'Hagan and extensively studied by Azzalini. A random variable X is said to have a skew-symmetric distribution if its probability density function is f(x) = 2g(x)G(lambda x), where g and G are respectively the probability density function and the cumulative distribution function of a distribution symmetric around 0, and lambda is the skewness parameter. We investigate the mathematical properties of this distribution and apply it to real applications. In particular, we consider exchange rate data for six different currencies, namely the Australian Dollar, Canadian Dollar, European Euro, Japanese Yen, Swiss Franc and United Kingdom Pound versus the United States Dollar. To describe a life phenomenon we are mostly interested in the case when the random variable is positive. Thus, we consider the case when the skew Laplace pdf is truncated to the left at 0 and study its mathematical properties. Comparisons with other lifetime distributions are presented. In particular, we compare the truncated skew Laplace (TSL) distribution with the two-parameter gamma probability distribution on simulated and real data with respect to its reliability behavior. We also study the hypoexponential pdf and compare it with the TSL distribution. Since the TSL pdf has an increasing failure rate (IFR), we investigate a possible application in system maintenance; in particular, we study problems related to preventive maintenance.
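A sketch of the defining construction, using the standard Laplace density for both g and G, with the truncation at 0 handled by numerical renormalization (the skewness parameter value is illustrative):

```python
import numpy as np

def skew_laplace_pdf(x, lam):
    """Skew-symmetric density f(x) = 2 g(x) G(lam * x), where g and G are
    the pdf and cdf of the standard Laplace distribution."""
    g = 0.5 * np.exp(-np.abs(x))
    y = lam * x
    G = np.where(y < 0, 0.5 * np.exp(y), 1.0 - 0.5 * np.exp(-y))
    return 2.0 * g * G

# Truncated skew Laplace (TSL): keep x >= 0 and renormalize the tail mass,
# giving a candidate lifetime model (increasing failure rate, per the abstract).
x = np.linspace(0.0, 20.0, 20001)
f = skew_laplace_pdf(x, lam=2.0)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))          # trapezoidal rule
f_tsl = f / mass
print(np.sum(0.5 * (f_tsl[1:] + f_tsl[:-1]) * np.diff(x)))  # ~ 1.0
```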
