11

Singularly perturbed problems with characteristic layers : Supercloseness and postprocessing

Franz, Sebastian 14 July 2008 (has links)
In this thesis singularly perturbed convection-diffusion equations in the unit square are considered. Due to the presence of a small perturbation parameter, the solutions of those problems exhibit an exponential layer near the outflow boundary and two parabolic layers near the characteristic boundaries. Discretisation of such problems on standard meshes and with standard methods leads to numerical solutions with unphysical oscillations, unless the mesh size is of the order of the perturbation parameter, which is impracticable. Instead we aim at uniformly convergent methods using layer-adapted meshes combined with standard methods. The meshes considered here are S-type meshes--generalisations of the standard Shishkin mesh. The domain is dissected into a non-layer part and layer parts. Inside the layer parts, the mesh may be anisotropic and non-uniform, depending on a mesh-generating function. We show that the unstabilised Galerkin finite element method with bilinear elements on an S-type mesh is uniformly convergent in the energy norm of order (almost) one. Moreover, the numerical solution shows a supercloseness property, i.e. in the given norm it is closer to the nodal bilinear interpolant than to the exact solution. Unfortunately, the Galerkin method lacks stability, resulting in linear systems that are hard to solve. To overcome this drawback, stabilisation methods are used. We analyse different stabilisation techniques with respect to the supercloseness property. For the residual-based methods Streamline Diffusion FEM and Galerkin Least Squares FEM, the choice of parameters is addressed additionally. The modern stabilisation technique Continuous Interior Penalty FEM--penalisation of jumps of derivatives--is considered too. All those methods are proved to possess convergence and supercloseness properties similar to the standard Galerkin FEM. With a suitable postprocessing operator, the supercloseness property can be used to enhance the accuracy of the numerical solution, and superconvergence of order (almost) two can be proved. We compare different postprocessing methods and prove superconvergence of the above numerical methods on S-type meshes. To recover the exact solution, we apply continuous biquadratic interpolation on a macro mesh and a discontinuous biquadratic projection on a macro mesh, and we use two methods to recover the gradient of the exact solution. Special attention is paid to the effects of non-uniformity due to the S-type meshes. Numerical simulations illustrate the theoretical results.
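A hedged illustration (our own sketch, not code from the thesis): the simplest S-type mesh is the piecewise-uniform Shishkin mesh, whose 1D version below condenses half of the mesh points into the layer region; the 2D meshes for problems with characteristic layers arise as tensor products of such 1D meshes. The parameter names sigma (tied to the formal order of the method) and beta (a lower bound on the convection coefficient) follow the standard literature.

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0, beta=1.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N cells (N even).

    Half of the cells resolve the boundary layer of width tau near
    x = 1 (the outflow boundary); the other half discretize the
    non-layer part uniformly.
    """
    assert N % 2 == 0, "N must be even"
    # Transition point: min(1/2, (sigma * eps / beta) * ln N)
    tau = min(0.5, sigma * eps / beta * np.log(N))
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])

# Example: eps = 1e-4 strongly condenses the points near x = 1.
print(shishkin_mesh(16, 1e-4))
```

General S-type meshes replace the uniform grading inside the layer part by a mesh-generating function; the piecewise-uniform choice above is the special case of the classical Shishkin mesh.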
12

On-line visualization in parallel computations

Pester, M. 30 October 1998 (has links) (PDF)
The investigation of new parallel algorithms for MIMD computers requires some postprocessing facilities for quickly evaluating the behavior of those algorithms. We present two kinds of visualization tool implementations for 2D and 3D finite element applications to be used on a parallel computer and a host workstation.
13

Improving the quality and availability of satellite-based reference sensing through smoothing in postprocessing

Bauer, Stefan 02 February 2013 (has links) (PDF)
This thesis investigates postprocessing methods for increasing the accuracy and availability of satellite-based positioning methods that work without inertial sensors. The goal is to produce, even under the difficult reception conditions that prevail in urban areas, a trajectory whose accuracy qualifies it as a reference for other methods. Two approaches are pursued: the use of IGS data, and smoothing that incorporates vehicle odometry sensors. It is shown that using IGS data reduces the error by 50% to 70%. Furthermore, the smoothing methods demonstrated that they consistently achieve decimetre-level accuracy even under poor reception conditions.
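The abstract does not name a particular smoothing algorithm; one standard way to realize smoothing in postprocessing is a forward Kalman filter followed by a backward Rauch-Tung-Striebel (RTS) pass. The sketch below is a generic 1D constant-velocity example under our own assumptions, not the method of the thesis; fusing odometry would add further measurement equations.

```python
import numpy as np

def kalman_rts(z, dt=1.0, q=0.1, r=1.0):
    """Forward Kalman filter plus backward RTS smoothing pass for a
    1D constant-velocity model; z holds position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # process noise (CV model)
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                        # measurement noise

    n = len(z)
    xf = np.zeros((n, 2)); Pf = np.zeros((n, 2, 2))
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))
    x, P = np.array([z[0], 0.0]), np.eye(2)
    for k in range(n):                         # forward (filter) pass
        xp[k] = F @ x if k else x
        Pp[k] = F @ P @ F.T + Q if k else P
        K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
        x = xp[k] + K @ (z[k] - H @ xp[k])
        P = (np.eye(2) - K @ H) @ Pp[k]
        xf[k], Pf[k] = x, P

    xs = xf.copy()
    for k in range(n - 2, -1, -1):             # backward (smoothing) pass
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
    return xs

# Example: noisy positions of a uniformly moving vehicle.
rng = np.random.default_rng(1)
truth = 0.5 * np.arange(50)
smoothed = kalman_rts(truth + rng.normal(0.0, 1.0, 50))
```

Because the backward pass uses all measurements, the smoothed trajectory is typically markedly more accurate than the filtered one, which is what qualifies such postprocessed tracks as references.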
14

On a Family of Variational Time Discretization Methods

Becher, Simon 09 September 2022 (has links)
We consider a family of variational time discretizations that generalizes discontinuous Galerkin (dG) and continuous Galerkin-Petrov (cGP) methods. In addition to variational conditions the methods also contain collocation conditions in the time mesh points. The individual family members are characterized by two parameters that represent the local polynomial ansatz order and the number of non-variational conditions, which is also related to the global temporal regularity of the numerical solution. Moreover, with respect to Dahlquist's stability problem the variational time discretization (VTD) methods share their stability properties with either the dG or the cGP method and, hence, are at least A-stable. With this thesis, we present the first comprehensive theoretical study of the family of VTD methods in the context of non-stiff and stiff initial value problems as well as, in combination with a finite element method for spatial approximation, in the context of parabolic problems. Here, we mainly focus on the error analysis for the discretizations. More concretely, for initial value problems the pointwise error is bounded, whereas for parabolic problems we derive error estimates in various typical integral-based (semi-)norms. Furthermore, we show superconvergence results in the time mesh points. In addition, some important concepts and key properties of the VTD methods are discussed and often exploited in the error analysis. These include, in particular, the associated quadrature formulas, a beneficial postprocessing, the idea of cascadic interpolation, connections between the different VTD schemes, and connections to other classes of methods (collocation methods, Runge-Kutta-like methods). Numerical experiments for simple academic test examples are used to highlight various properties of the methods and to verify the optimality of the proven convergence orders.

Table of contents:
List of Symbols and Abbreviations
Introduction
I Variational Time Discretization Methods for Initial Value Problems
  1 Formulation, Analysis for Non-Stiff Systems, and Further Properties
    1.1 Formulation of the methods
      1.1.1 Global formulation
      1.1.2 Another formulation
    1.2 Existence, uniqueness, and error estimates
      1.2.1 Unique solvability
      1.2.2 Pointwise error estimates
      1.2.3 Superconvergence in time mesh points
      1.2.4 Numerical results
    1.3 Associated quadrature formulas and their advantages
      1.3.1 Special quadrature formulas
      1.3.2 Postprocessing
      1.3.3 Connections to collocation methods
      1.3.4 Shortcut to error estimates
      1.3.5 Numerical results
    1.4 Results for affine linear problems
      1.4.1 A slight modification of the method
      1.4.2 Postprocessing for the modified method
      1.4.3 Interpolation cascade
      1.4.4 Derivatives of solutions
      1.4.5 Numerical results
  2 Error Analysis for Stiff Systems
    2.1 Runge-Kutta-like discretization framework
      2.1.1 Connection between collocation and Runge-Kutta methods and its extension
      2.1.2 A Runge-Kutta-like scheme
      2.1.3 Existence and uniqueness
      2.1.4 Stability properties
    2.2 VTD methods as Runge-Kutta-like discretizations
      2.2.1 Block structure of A_VTD
      2.2.2 Eigenvalue structure of A_VTD
      2.2.3 Solvability and stability
    2.3 (Stiff) Error analysis
      2.3.1 Recursion scheme for the global error
      2.3.2 Error estimates
      2.3.3 Numerical results
II Variational Time Discretization Methods for Parabolic Problems
  3 Introduction to Parabolic Problems
    3.1 Regularity of solutions
    3.2 Semi-discretization in space
      3.2.1 Reformulation as ODE system
      3.2.2 Differentiability with respect to time
      3.2.3 Error estimates for the semi-discrete approximation
    3.3 Full discretization in space and time
      3.3.1 Formulation of the methods
      3.3.2 Reformulation and solvability
  4 Error Analysis for VTD Methods
    4.1 Error estimates for the l-th derivative
      4.1.1 Projection operators
      4.1.2 Global L2-error in the H-norm
      4.1.3 Global L2-error in the V-norm
      4.1.4 Global (locally weighted) L2-error of the time derivative in the H-norm
      4.1.5 Pointwise error in the H-norm
      4.1.6 Supercloseness and its consequences
    4.2 Error estimates in the time (mesh) points
      4.2.1 Exploiting the collocation conditions
      4.2.2 What about superconvergence!?
      4.2.3 Satisfactory order convergence avoiding superconvergence
    4.3 Final error estimate
    4.4 Numerical results
Summary and Outlook
Appendix
  A Miscellaneous Results
    A.1 Discrete Gronwall inequality
    A.2 Something about Jacobi-polynomials
  B Abstract Projection Operators for Banach Space-Valued Functions
    B.1 Abstract definition and commutation properties
    B.2 Projection error estimates
    B.3 Literature references on basics of Banach space-valued functions
  C Operators for Interpolation and Projection in Time
    C.1 Interpolation operators
    C.2 Projection operators
    C.3 Some commutation properties
    C.4 Some stability results
  D Norm Equivalences for Hilbert Space-Valued Polynomials
    D.1 Norm equivalence used for the cGP-like case
    D.2 Norm equivalence used for final error estimate
Bibliography
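As a minimal taste of the VTD family (our own sketch, not code from the thesis): the dG(1) member applied to Dahlquist's test equation y' = λy. Testing the local linear ansatz against {1, t/h} and adding the upwind jump term yields a 2×2 system per time step; at the mesh points dG(1) is superconvergent of order three, which the loop below verifies numerically.

```python
import numpy as np

def dg1_step(y_in, z):
    """One dG(1) step for y' = lam*y, with z = lam*h.

    Local ansatz u(t) = a + b*t/h on (0, h]; testing against {1, t/h}
    and adding the upwind jump (u(0+) - y_in)*v(0+) gives a 2x2 system
    whose solution reproduces the dG(1) stability function
    R(z) = (1 + z/3) / (1 - 2z/3 + z^2/6).
    """
    A = np.array([[1.0 - z, 1.0 - z / 2.0],
                  [-z / 2.0, 0.5 - z / 3.0]])
    a, b = np.linalg.solve(A, np.array([y_in, 0.0]))
    return a + b                       # value at the right mesh point

# Nodal error for y' = -y, y(0) = 1 on [0, 1]:
for n in (4, 8, 16, 32):
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        y = dg1_step(y, -h)
    print(n, abs(y - np.exp(-1.0)))    # decays like h^3 (order 2k+1, k=1)
```

Halving h reduces the nodal error by roughly a factor of eight, consistent with superconvergence of order 2k+1 in the time mesh points for dG(k).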
15

Interestingness Measures for Association Rules in a KDD Process : PostProcessing of Rules with ARQAT Tool

Huynh, Xuan-Hiep 07 December 2006 (has links) (PDF)
This work takes place in the framework of Knowledge Discovery in Databases (KDD), often called "Data Mining". This domain is both a main research topic and an application field in companies. KDD aims at discovering previously unknown and useful knowledge in large databases. In the last decade much research has been published about association rules, which are frequently used in data mining. Association rules, which are implicative tendencies in data, have the advantage of being an unsupervised model; in return, however, they often deliver a large number of rules. As a consequence, a postprocessing task is required by the user to help him/her understand the results. One way to reduce the number of rules - to validate or to select the most interesting ones - is to use interestingness measures adapted to both the user's goals and the dataset studied. Selecting the right interestingness measures is an open problem in KDD. A lot of measures have been proposed to extract the knowledge from large databases, and many authors have introduced interestingness properties for selecting a suitable measure for a given application. Some measures are adequate for some applications, but others are not. In our thesis, we propose to study the set of interestingness measures available in the literature, in order to evaluate their behavior according to the nature of the data and the preferences of the user. The final objective is to guide the user's choice towards the measures best adapted to his/her needs and, ultimately, to select the most interesting rules. For this purpose, we propose a new approach implemented in a new tool, ARQAT (Association Rule Quality Analysis Tool), in order to facilitate the analysis of the behavior of about 40 interestingness measures. In addition to elementary statistics, the tool allows a thorough analysis of the correlations between measures using correlation graphs based on the coefficients suggested by Pearson, Spearman and Kendall. These graphs are also used to identify clusters of similar measures. Moreover, we propose a series of comparative studies on the correlations between interestingness measures on several datasets. We discovered a set of correlations not very sensitive to the nature of the data used, which we called stable correlations. Finally, 14 graphical and complementary views structured on 5 levels of analysis - ruleset analysis, correlation and clustering analysis, most-interesting-rules analysis, sensitivity analysis, and comparative analysis - are illustrated in order to show the interest of both the exploratory approach and the use of complementary views.
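To make the setting concrete, here is a toy version (with invented data and only three classical measures) of the kind of analysis ARQAT automates: evaluate interestingness measures over a ruleset, then correlate their rankings, e.g. with Spearman's coefficient.

```python
import numpy as np
from scipy.stats import spearmanr

def measures(n, n_a, n_b, n_ab):
    """Three classical interestingness measures for a rule a -> b,
    from the total count n and the counts of a, b and a-and-b."""
    support = n_ab / n
    confidence = n_ab / n_a
    lift = confidence / (n_b / n)
    return support, confidence, lift

# Invented toy ruleset: one (n, n_a, n_b, n_ab) tuple per rule.
rules = [(1000, 120, 400, 90), (1000, 300, 500, 200),
         (1000, 50, 100, 40), (1000, 600, 700, 450),
         (1000, 90, 250, 30)]
vals = np.array([measures(*r) for r in rules])

# Rank correlation between the measures across the ruleset; ARQAT
# builds its correlation graphs from such matrices.
rho, _ = spearmanr(vals)
print(np.round(rho, 2))
```

In ARQAT, such matrices are computed with the Pearson, Spearman and Kendall coefficients over about 40 measures, and the resulting correlation graphs are clustered to identify groups of similar measures.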
17

Strength proof according to the FKM-Guideline within Creo Simulate

Kölbl, Markus 02 July 2018 (has links)
◦ The strength verification according to the FKM guideline with FEM results is time-consuming and requires a separate tool, e.g. KISSsoft.
  ◦ The strength verification was so far only carried out at individual points of the model, usually selected on the basis of the equivalent stresses. The following influences cannot be considered at all, or not sufficiently (see the sketch after this list):
  ▪ locally different limit values of strain or the plastic notch factor,
  ▪ the location of the most critical combination of stress amplitude and mean stress,
  ▪ the local stress gradient.
  ◦ femMeshFKM was developed by ZF for railway applications (IX), where FEM calculations are performed with Permas; postprocessing was done in Hyperworks.
  ◦ For ZF Test Systems, femMeshFKM has been extended to use Creo Simulate data. The postprocessing can now also be done in Simulate.
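A heavily simplified sketch of why nodewise evaluation matters (illustrative only; the actual FKM procedure involves many more factors, such as stress gradient, surface and size influences, and material data different from the invented values below):

```python
import numpy as np

def utilization(sig_a, sig_m, sig_w=200.0, M=0.3):
    """Toy mean-stress-corrected fatigue utilization per node.

    sig_a, sig_m : stress amplitude and mean stress per node (MPa);
    sig_w, M     : illustrative fatigue strength and mean stress
                   sensitivity -- invented values, not FKM data.
    """
    sig_ak = sig_w - M * sig_m    # permissible amplitude (Goodman-like)
    return sig_a / sig_ak

sig_a = np.array([120.0, 80.0, 60.0])
sig_m = np.array([50.0, 300.0, 0.0])
print(utilization(sig_a, sig_m))  # the node with the smaller amplitude
                                  # but high mean stress governs here
```

Evaluating such a utilization at every node, rather than only where the equivalent stress peaks, is exactly what a mesh-based postprocessor like femMeshFKM enables.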
18

Sparse blind deconvolution in ultrasound imaging using an adaptive CLEAN algorithm

Chira, Liviu-Teodor 17 October 2013 (has links)
Medical ultrasound imaging is a modality in constant evolution, notably in post-processing, where the goal is to improve the resolution and contrast of the images. Such improvements should help the physician to better distinguish the examined tissues, thus improving medical diagnosis. A wide range of hardware and software techniques already exists. In this work we focus on so-called blind deconvolution techniques, which operate in the time domain and use the signal envelope as their basic information. They are able to reconstruct sparse images, i.e. images of scatterers free of speckle noise. The main steps of this type of method are (i) blind estimation of the point spread function (PSF), (ii) estimation of the scatterers under the assumption that the explored medium is sparse, and (iii) reconstruction of the image by reconvolution with an "ideal" PSF. The proposed method was compared with reference techniques in medical imaging using synthetic signals, real ultrasound sequences (1D) and ultrasound images (2D) with different statistics. The method, which offers a much lower execution time than competing techniques, is suited to images containing a small or moderate number of scatterers.
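A minimal CLEAN-style loop, sketched under our own assumptions (known PSF, 1D signal; the thesis estimates the PSF blindly and adapts the iteration), illustrates steps (ii) and (iii):

```python
import numpy as np

def clean_1d(rf, psf, gain=0.5, n_iter=200, thresh=0.05):
    """Minimal CLEAN-style sparse deconvolution of one RF line.

    Repeatedly locates the strongest residual sample, records a point
    reflector there and subtracts the shifted, scaled PSF. The PSF is
    assumed known here; the thesis estimates it blindly first.
    """
    residual = rf.astype(float).copy()
    reflectors = np.zeros_like(residual)
    c = len(psf) // 2                           # PSF centre index
    peak0 = np.abs(residual).max()
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(residual)))
        if abs(residual[k]) < thresh * peak0:   # stopping criterion
            break
        amp = gain * residual[k]
        reflectors[k] += amp
        lo, hi = max(0, k - c), min(len(residual), k - c + len(psf))
        residual[lo:hi] -= amp * psf[lo - (k - c):hi - (k - c)]
    return reflectors, residual

# Synthetic example: a few reflectors, then reconvolution with a
# narrower "ideal" PSF to form the enhanced image.
t = np.linspace(-1.0, 1.0, 41)
psf = np.cos(2 * np.pi * 5 * t) * np.exp(-(t / 0.3) ** 2)
x = np.zeros(400)
x[[100, 180, 185, 300]] = [1.0, 0.7, -0.5, 0.9]
rf = np.convolve(x, psf, mode="same")
reflectors, _ = clean_1d(rf, psf)
image = np.convolve(reflectors, np.exp(-(t / 0.1) ** 2), mode="same")
```

The narrower reconvolution kernel is what yields the resolution gain: the recovered reflector map is sharp and speckle-free, and the final image inherits the width of the "ideal" PSF rather than that of the measured one.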
20

Vessel network recognition in the infrared spectrum

Βλάχος, Μάριος 13 July 2010 (has links)
The construction of tomographic systems for human tissue that operate in the infrared part of the spectrum is an important prospect for new medical diagnostic methods. One of the most crucial problems to be resolved is the low penetration depth and the high degree of absorption and scattering, which strongly distort the radiation that passes through human tissue. In this thesis, the problem of extracting the finger vein pattern from infrared images of the finger and the related problem of retinal vessel tree segmentation were studied. Moreover, the problem of shading and non-uniform illumination correction was studied for images that suffer from these effects, either due to an imperfect set-up of the image acquisition system or due to the interaction between objects and illumination in the scene. Existing algorithms were improved and novel algorithms were developed, both for vein pattern extraction and for shading and non-uniform illumination correction. The proposed methods include novel preprocessing modules for intensity normalization, elimination of fingerprint lines, non-linear contrast enhancement using spatial information, and shading and non-uniform illumination correction. The vein pattern extraction itself was performed using ten novel methods based on structural classification, spatial derivative information and fuzzy set theory. The effectiveness of the proposed methods and algorithms was evaluated on both real and artificial images distorted by different types of noise at different signal-to-noise ratios. The majority of the methods detect the vein network with satisfactory accuracy, owing to the successful interplay between the preprocessing methods and the vein pattern extraction methods. In addition, the accuracy of the extracted vein network was further improved using advanced postprocessing methods based on binary mathematical morphology. Finally, two novel methods for retinal vessel segmentation were proposed and evaluated. They were also compared with the most important methods already presented in the literature, and one of them achieved the best experimental results among all unsupervised methods evaluated on the publicly available DRIVE database.
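A generic sketch of the morphological postprocessing idea (our own assumptions, not the exact pipeline of the thesis): clean a binary vein mask by bridging small gaps, removing speckle and discarding tiny connected components.

```python
import numpy as np
from scipy import ndimage

def clean_vein_mask(mask, min_size=50):
    """Morphological postprocessing of a binary vein mask: closing
    bridges small gaps along vessels, opening removes isolated
    speckle, and tiny connected components are discarded."""
    out = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    out = ndimage.binary_opening(out, structure=np.ones((3, 3)))
    labels, n = ndimage.label(out)                 # connected components
    sizes = ndimage.sum(out, labels, range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size                   # drop small blobs
    return keep[labels]

# Example on a synthetic noisy mask:
rng = np.random.default_rng(0)
noisy = rng.random((64, 64)) > 0.9
print(clean_vein_mask(noisy).sum())
```

Because veins form long thin connected structures while segmentation noise forms small isolated blobs, even this simple component-size criterion removes most false positives without eroding the vessel network itself.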
