261 |
Deductive Module Extraction for Expressive Description Logics: Extended Version. Koopmann, Patrick; Chen, Jieying. 20 June 2022.
In deductive module extraction, we determine a small subset of an ontology that preserves, for a given vocabulary, all logical entailments expressible in that vocabulary. While stronger module notions have been discussed in the literature, we argue that for applications in ontology analysis and ontology reuse, deductive modules, which are decidable and potentially smaller, are often sufficient. We present methods based on uniform interpolation for extracting different variants of deductive modules, satisfying properties such as completeness, minimality and robustness under replacements, the latter being particularly relevant for ontology reuse. An evaluation of our implementation shows that the modules computed by our method are often significantly smaller than those computed by existing methods. / This is an extended version of the article in the proceedings of IJCAI 2020.
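The engine behind these modules, uniform interpolation (also called forgetting), is easiest to see in a propositional toy setting, where forgetting an atom amounts to replacing all clauses that mention it by their resolvents. A minimal sketch of that analogue follows; it is not the description-logic algorithm of the paper, and the two-clause theory is invented:

```python
from itertools import product

def forget(clauses, p):
    """Uniform interpolation in propositional logic: eliminate atom p from
    a CNF (clauses are frozensets of signed integers) by adding every
    resolvent on p and dropping the clauses that mention p."""
    pos = [c for c in clauses if p in c]
    neg = [c for c in clauses if -p in c]
    result = {c for c in clauses if p not in c and -p not in c}
    for cp, cn in product(pos, neg):
        resolvent = (cp - {p}) | (cn - {-p})
        if not any(-lit in resolvent for lit in resolvent):  # drop tautologies
            result.add(frozenset(resolvent))
    return result

# Toy "ontology": atoms 1=A, 2=B, 3=C; the clauses encode A -> B and B -> C.
T = {frozenset({-1, 2}), frozenset({-2, 3})}
print(forget(T, 2))  # {frozenset({-1, 3})}, i.e. A -> C survives over {A, C}
```

The result entails exactly the formulas over the remaining vocabulary that the original theory did, which is the deductive guarantee a module must provide.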
|
262 |
Echo Sounder Measurements in Relation to Different Data Collection and Interpolation Methods: A Case Study of Lake Öjaren, Sandviken / Ekolodsmätningars förhållning mot olika insamlings- och interpolationsmetoder: En fallstudie på sjön Öjaren, Sandviken. Karlsson, Erik; Sjöström, Benjamin. January 2020.
Traditionally, larger vessels equipped with echo sounders have been used to conduct bathymetric surveys of lake and sea bottoms. Surveying shallow waters has been problematic, since larger vessels cannot reach them. To tackle that problem, smaller unmanned surface vessels (USVs) have been developed; they also help near rocks and in other areas that lack updated depth values. This study aims to assess how a Seafloor HydroLite TM single-beam echo sounder mounted on a USV differs from surveys made with GNSS and with a measuring tape. It also aims to evaluate which interpolation method is most suitable for creating depth models from single-beam echo sounder data, and how cross-section lines affect the depth models created from such data. The experimental surveys with GNSS, measuring tape and single-beam echo sounder were carried out in Lake Öjaren, outside Sandviken. In total, 91 points were collected with GNSS and measuring tape, and 8 sounding lines and 9 cross-section lines were surveyed with the echo sounder mounted on the USV.
The depth models were created in Surfer 10 using three interpolation methods: kriging, natural neighbor, and triangulation with linear interpolation. All calculations were performed in Microsoft Excel, and the measuring-tape data were taken as the "true" values when comparing the collection methods. The results showed that the depth models created from GNSS data are similar to those created from measuring-tape data, and that the GNSS-based models show the smallest difference from the models created from single-beam echo sounder data. The comparison between interpolation methods showed that the choice of interpolation method does not have a significant impact on the depth model. We conclude that a single-beam echo sounder can help create a more detailed depth model than GNSS or measuring-tape data alone, and that it is a more cost-effective method, since more data can be collected in less time. However, measurement errors that are hard to detect can occur in single-beam echo sounder data. Adding cross-section lines can contribute to an even more detailed depth model, and they can serve as control points when checking the echo sounder data.
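The thesis built its depth models in Surfer 10, but the comparison it describes can be prototyped with SciPy's griddata, whose 'linear' method interpolates on a Delaunay triangulation, the same idea as Surfer's triangulation with linear interpolation. A minimal sketch; the coordinates and depths are synthetic placeholders, and 'nearest'/'cubic' merely stand in for other gridding choices:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the surveyed data: (x, y) positions and depths.
pts_sonar = rng.uniform(0, 100, size=(300, 2))           # echo sounder track points
depth_sonar = 2 + 0.05 * pts_sonar[:, 0] + rng.normal(0, 0.05, 300)
pts_tape = rng.uniform(0, 100, size=(30, 2))             # tape-measured control points
depth_tape = 2 + 0.05 * pts_tape[:, 0]                   # treated as the "true" depths

# 'linear' = linear interpolation on a Delaunay triangulation of the samples.
for method in ("nearest", "linear", "cubic"):
    pred = griddata(pts_sonar, depth_sonar, pts_tape, method=method)
    rmse = np.sqrt(np.nanmean((pred - depth_tape) ** 2))
    print(f"{method:8s} RMSE vs tape: {rmse:.3f} m")
```

Scoring each gridding method by its RMSE at the control points mirrors the study's use of the tape data as the "true" values.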
|
263 |
Spatial Distribution of Arsenic Contamination in the Bedrock of Arlanda-Rosersberg Area / Rumslig distribution av arsenik i berggrunden vid Arlanda-Rosersbergs området. Nilsson, Cornelia. January 2024.
Arsenic, a naturally occurring element in the bedrock, is known to cause severe health issues upon exposure. Both the World Health Organisation and the Swedish Environmental Protection Agency (Naturvårdsverket) state that arsenic concentrations above 10 ppm in bedrock are already concerning. During the construction of a new highway junction in the Arlanda-Rosersberg area, soil samples revealed elevated levels of arsenic. Since the contamination poses a risk to public health, and may spread further as long as its origin and extent are unknown, the Geological Survey of Sweden undertook an investigation that included geological mapping as well as geochemical analysis of whole-rock samples. This thesis has used the collected geochemical data to visualise the spatial distribution of arsenic over the project area, in order to examine whether there is a correlation between elevated concentrations and specific lithologies, or whether other processes, such as pegmatite formation, could be responsible for the enrichment. Kernel interpolation in ArcGIS Pro was used to create three maps with different parameters: without geological barriers, with geological barriers including samples taken near pegmatites, and with geological barriers excluding samples taken near pegmatites. The resulting maps indicate that a majority of the project area far exceeds the recommended limit, regardless of the parameters. Based on the data, it is currently impossible to determine the origin of the arsenic contamination conclusively: the results point to both a lithological correlation and a process-related source, and a combination of the two cannot be ruled out. To improve the resolution and the knowledge of the area, more extensive examination and data gathering would be beneficial, as would methodological improvements that better simulate the mobilisation of arsenic.
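ArcGIS Pro's Kernel Interpolation (with barriers) is proprietary, but the basic idea, a distance-decaying weighted average of the samples, can be sketched without barriers as a Nadaraya-Watson smoother. Everything below (coordinates, bandwidth, the lognormal ppm values) is an invented placeholder, not the thesis data:

```python
import numpy as np

def kernel_surface(xy, values, grid_x, grid_y, bandwidth=500.0):
    """Barrier-free Gaussian-kernel weighted mean of sample values at
    every grid node; a rough stand-in for ArcGIS Pro's Kernel
    Interpolation, which additionally honours geological barriers."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = ((nodes[:, None, :] - xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    est = (w @ values) / w.sum(axis=1)
    return est.reshape(gx.shape)

# Hypothetical whole-rock samples: coordinates in metres, As in ppm.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 5000, size=(60, 2))
as_ppm = rng.lognormal(mean=2.5, sigma=0.6, size=60)

surface = kernel_surface(xy, as_ppm, np.linspace(0, 5000, 50), np.linspace(0, 5000, 50))
print(f"fraction of area above the 10 ppm guideline: {(surface > 10).mean():.0%}")
```

Barriers would enter by replacing the Euclidean distance with one that cannot cross mapped lithological boundaries, which is what distinguishes the three map variants described above.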
|
264 |
Seasonal Analysis of PFAS Spreading at Örnsköldsvik Airport Using Geostatistical Tools in GIS / Säsongsanalys av PFAS-spridning på Örnsköldsviks flygplats med hjälp av geostatistiska verktyg i GIS. Hu, Anna. January 2024.
This thesis, carried out in collaboration with the Swedish Geotechnical Institute (SGI), investigates the seasonal spreading of PFAS contamination near Örnsköldsvik airport, where AFFF (aqueous film-forming foam) was used during fire drills from 1993 to 2008. Concentrations of 11 PFAS compounds from November 2022 to October 2023 were analysed using scatterplots and time-series graphs, and were interpolated in ArcMap to predict the spatial spreading across the four seasons. Geologically, the area features an upper aquifer confined by a clay layer, potentially with a lower aquifer beneath it. High PFAS concentrations in the upper aquifer occur during spring and autumn, except for PFHxA and PFBA, which peak in winter. The contamination primarily extends towards Varmyrän, with unexpected spread towards Bromyrän and the area northeast of the fire drill site. Significant spatial changes occur from winter to spring, and certain short-chain PFAS behave similarly in time and space. Limited data for the lower aquifer allowed analysis only during summer and autumn, revealing major spatial changes concentrated north of the fire drill area. In both layers, PFOS, PFHxS and PFHxA show the highest concentrations. Various factors influence PFAS movement, including groundwater levels, soil types and precipitation, although seasonal groundwater-level data are lacking. PFOS and PFHxS move towards Varmyrän within approximately 30 days, with additional movement towards the airport. Percolation from the upper to the lower aquifer could take around six months, although the lower aquifer lacks data for winter and spring.
Kriging is sensitive to the number of sample points, and the normalization applied complicates comparisons of absolute values between interpolations. This study highlights the need for comprehensive data collection, alternative methodologies for studying seasonal PFAS changes, and improved data visualization techniques to enhance the understanding of PFAS dynamics in the groundwater.
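The interpolation itself was done with ArcMap's geostatistical tools; as a transparent stand-in, ordinary kriging reduces to one small linear system per prediction point. A minimal sketch with an assumed exponential variogram and invented well coordinates and concentrations; the sensitivity to point quantity noted above shows up directly in the kriging variance:

```python
import numpy as np

def gamma(h, sill=1.0, rng_=800.0):
    """Exponential semivariogram (assumed model, not fitted to real data)."""
    return sill * (1.0 - np.exp(-3.0 * h / rng_))

def ordinary_kriging(xy, z, x0):
    """Solve the ordinary-kriging system (with Lagrange multiplier mu)
    for one prediction location x0; returns estimate and variance."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, b[:n] @ w + mu

# Hypothetical seasonal PFOS samples (ng/L) at monitoring wells (metres).
rng = np.random.default_rng(2)
wells = rng.uniform(0, 2000, size=(12, 2))
pfos = rng.lognormal(3.0, 0.8, size=12)
pred, var = ordinary_kriging(wells, pfos, np.array([1000.0, 1000.0]))
print(f"kriged PFOS: {pred:.1f} ng/L, kriging variance: {var:.2f}")
```

With only a handful of wells per season the kriging variance grows quickly away from the sampled locations, which is the point-quantity sensitivity the abstract mentions.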
|
265 |
On Visualizing Branched Surface: an Angle/Area Preserving Approach. Zhu, Lei. 12 September 2004.
The techniques of surface deformation and mapping are useful tools for the visualization of medical surfaces, especially for highly undulated or branched surfaces. In this thesis, two algorithms are presented for flattened visualizations of multi-branched medical surfaces, such as vessels. The first algorithm is an angle-preserving approach based on conformal analysis; the mapping function is obtained by minimizing two Dirichlet functionals, and on a triangulated representation of a vessel surface it can be implemented efficiently using a finite element method. The second algorithm adjusts the result of the conformal mapping to produce a flattened representation of the original surface that preserves areas, employing the theory of optimal mass transport via a gradient descent approach.

A new class of image morphing algorithms is also considered, based on the theory of optimal mass transport. The mass-moving energy functional is revised by adding an intensity-penalizing term in order to reduce undesired "fading" effects. The approach is parameter-free and has been applied to several natural and medical images to generate in-between image sequences.
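The Dirichlet-energy minimization at the heart of the first algorithm can be prototyped with the standard cotangent (piecewise-linear FEM) Laplacian. The sketch below computes a harmonic map with the boundary pinned to a circle, i.e. it minimizes a single Dirichlet functional on a disk-topology patch; the full conformal construction of the thesis minimizes two, and the saddle-shaped test patch is invented:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.spatial import Delaunay

def harmonic_flatten(verts, tris, bnd, bnd_uv):
    """Flatten a disk-topology triangulated patch by minimizing Dirichlet
    energy: assemble the cotangent (linear FEM) Laplacian and solve
    Laplace's equation with the boundary pinned to bnd_uv."""
    n = len(verts)
    L = sp.lil_matrix((n, n))
    for i, j, k in tris:
        for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
            e1, e2 = verts[a] - verts[c], verts[b] - verts[c]
            w = 0.5 * (e1 @ e2) / np.linalg.norm(np.cross(e1, e2))  # cot of angle at c
            L[a, b] -= w; L[b, a] -= w; L[a, a] += w; L[b, b] += w
    L = L.tocsr()
    inner = np.setdiff1d(np.arange(n), bnd)
    uv = np.zeros((n, 2)); uv[bnd] = bnd_uv
    rhs = -(L[inner][:, bnd] @ uv[bnd])
    solve = spla.factorized(L[inner][:, inner].tocsc())
    uv[inner, 0], uv[inner, 1] = solve(rhs[:, 0]), solve(rhs[:, 1])
    return uv

# Invented test patch: a saddle surface over a square, boundary -> unit circle.
g = np.linspace(-1.0, 1.0, 15)
X, Y = np.meshgrid(g, g)
P2 = np.column_stack([X.ravel(), Y.ravel()])
mesh = Delaunay(P2)
verts = np.column_stack([P2, 0.4 * (P2[:, 0] ** 2 - P2[:, 1] ** 2)])
bnd = np.unique(mesh.convex_hull)
ang = np.arctan2(P2[bnd, 1], P2[bnd, 0])
uv = harmonic_flatten(verts, mesh.simplices, bnd,
                      np.column_stack([np.cos(ang), np.sin(ang)]))
print("flattened coordinates span:", uv.min(0), uv.max(0))
```

A subsequent area-correcting step, as in the second algorithm, would redistribute the parameterization so that triangle areas match those on the original surface.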
|
267 |
Interpolation des données en imagerie cardiaque par résonance magnétique du tenseur de diffusion / Interpolation of data in cardiac DT-MRI. Yang, Feng. 15 January 2011.
One of the fundamental problems in human cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is its poor spatial resolution, due to the hardware limitations of MRI scanners. The main purpose of this PhD work is to develop new approaches to improving the resolution of cardiac DT-MRI data, in order to better represent the myocardial architecture of the human heart and to compare it with results from other investigation techniques such as polarized light imaging. Within this framework, the work is composed of three main parts. The first part concerns a new approach to interpolating the fields of primary eigenvectors obtained from human cardiac DT-MRI, using a Thin-Plate Spline (TPS) model. This approach removes the noise-corrupted vectors, rather than denoising the whole vector field in a uniform manner, and uses the TPS model to exploit the correlation between vector components during interpolation. The second part deals with a new family of feature-based methods for diffusion tensor field interpolation, based on either Euler angles or quaternions. These methods exploit the characteristics of the tensor and preserve tensor parameters well, such as the tensor determinant, fractional anisotropy (FA) and mean diffusivity (MD).
This part also compares the main interpolation approaches at the level of diffusion-weighted images and at the level of tensor fields; the results show that DT-MRI data are better interpolated at the level of tensor fields. The last part investigates the changes in MD and FA after myocardial infarction in porcine hearts, and the influence of diffusion tensor interpolation methods on these parameters in the infarcted and remote regions. The infarcted region shows significantly lower FA and higher MD than the remote region, and tensor interpolation methods influence FA more than MD, which suggests that the clinical interpretation of these parameters after interpolation should be made with caution.
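The feature-based idea, interpolating the eigenframe and the eigenvalues separately instead of averaging tensor entries, can be sketched with a quaternion slerp on the rotation part. The exact parameterization in the thesis may differ, the tensors below are invented, and eigenvector sign and ordering ambiguities would need extra care in a real pipeline:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fa_md(eigvals):
    """Fractional anisotropy and mean diffusivity from tensor eigenvalues."""
    md = eigvals.mean()
    fa = np.sqrt(1.5 * ((eigvals - md) ** 2).sum() / (eigvals ** 2).sum())
    return fa, md

def interp_tensor(D1, D2, t):
    """Blend eigenvalues linearly and spin the eigenframes with quaternion
    slerp, so shape features (FA, MD) survive the interpolation."""
    frames, lams = [], []
    for D in (D1, D2):
        lam, V = np.linalg.eigh(D)
        if np.linalg.det(V) < 0:           # make V a proper rotation
            V[:, 0] *= -1
        frames.append(V); lams.append(lam)
    R = Slerp([0.0, 1.0], Rotation.from_matrix(frames))(t).as_matrix()
    lam = (1 - t) * lams[0] + t * lams[1]
    return R @ np.diag(lam) @ R.T

D1 = np.diag([1.7e-3, 0.3e-3, 0.2e-3])                       # fibre along x
Rz = Rotation.from_euler("z", 60, degrees=True).as_matrix()  # fibre rotated 60 deg
D2 = Rz @ D1 @ Rz.T
Dm = interp_tensor(D1, D2, 0.5)
print("FA, MD at midpoint:", fa_md(np.linalg.eigh(Dm)[0]))
```

Entrywise linear averaging of D1 and D2 would instead swell the midpoint tensor isotropically and depress its FA, which is precisely the artifact the feature-based methods avoid.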
|
269 |
Error analysis of the Galerkin FEM in L2-based norms for problems with layers: On the importance, conception and realization of balancing. Schopf, Martin. 7 May 2014.
In the present thesis it is shown that the most natural choice of norm for the analysis of the Galerkin FEM, namely the energy norm, fails to capture the boundary layer functions arising in certain reaction-diffusion problems: in view of the formal definition, such problems are not singularly perturbed with respect to the energy norm (the short computation after the two questions below makes this concrete). This observation raises two questions:
1. Does the Galerkin finite element method on standard meshes yield satisfactory approximations for the reaction-diffusion problem with respect to the energy norm?
2. Is it possible to strengthen the energy norm in such a way that the boundary layers are captured and that it can be reconciled with a robust finite element method, i.e. robust with respect to this strong norm?
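For a one-dimensional layer the failure can be computed directly; this is a standard model calculation, supplied here for orientation, not taken from the abstract:

```latex
% Boundary layer u(x) = exp(-x/sqrt(eps)) on (0, infinity):
\[
\|u\|_\varepsilon^2
  = \varepsilon\,|u|_1^2 + \|u\|_0^2
  = \varepsilon\cdot\frac{1}{\varepsilon}\int_0^\infty e^{-2x/\sqrt{\varepsilon}}\,dx
    + \int_0^\infty e^{-2x/\sqrt{\varepsilon}}\,dx
  = \sqrt{\varepsilon},
\]
% so the layer vanishes from the energy norm as eps -> 0:
\[
\|u\|_\varepsilon = \varepsilon^{1/4} \to 0,
\qquad\text{whereas}\qquad
\varepsilon^{1/2}\,|u|_1^2 = \tfrac12 ,
\]
% i.e. weighting the H^1 seminorm by eps^{1/2} instead of eps yields a
% "balanced" norm in which the layer contributes at order one.
```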
In Chapter 2 we answer the first question. We show that the Galerkin finite element approximation converges uniformly in the energy norm to the solution of the reaction-diffusion problem on standard shape-regular meshes. These results are completely new in two dimensions and are confirmed by numerical experiments. We also study certain convection-diffusion problems with characteristic layers in which some layers are not well represented in the energy norm.
These theoretical findings, validated by numerical experiments, have interesting implications for adaptive methods. Moreover, they lead to a re-evaluation of other results and methods in the literature.
In 2011, Lin and Stynes were the first to devise a method for a reaction-diffusion problem posed in the unit square that allows uniform a priori error estimates in an adequate, so-called balanced norm. Thus, the aforementioned second question is answered in the affirmative. The key idea is to obtain a non-standard weak formulation by testing also with derivatives of the test function, which is related to the H^1-Galerkin methods developed in the early 70s. Unfortunately, this direct approach requires excessive smoothness of the finite element space considered. Lin and Stynes circumvent this problem by rewriting their problem as a first-order system and applying a mixed method. Now the norm captures the layers, so they need to be resolved by a layer-adapted mesh. Lin and Stynes obtain optimal error estimates with respect to the balanced norm on Shishkin meshes. However, their method is unable to preserve the symmetry of the problem, and they rely on the Raviart-Thomas element for H(div)-conformity.
In Chapter 4 of the thesis a new continuous interior penalty (CIP) method is presented, embracing the approach of Lin and Stynes in the context of a broken Sobolev space. The resulting method induces a balanced norm in which uniform error estimates are proven. In contrast to the mixed method, the CIP method uses standard Q_2-elements on the Shishkin meshes. Both methods feature improved stability properties in comparison with the Galerkin FEM. Nevertheless, the latter also yields approximations that can be shown to converge to the true solution in a balanced norm, uniformly with respect to the diffusion parameter. Again, numerical experiments are conducted that agree with the theoretical findings.
In every finite element analysis the approximation error comes into play, eventually. If one seeks to prove any of the results mentioned on an anisotropic family of Shishkin meshes, one will need to take advantage of the different element sizes close to the boundary. While these are ideally suited to reflect the solution behavior, the error analysis is more involved and depends on anisotropic interpolation error estimates.
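The layer-adapted meshes referred to throughout are easy to generate. A minimal sketch of the standard Shishkin mesh on (0,1) for a reaction-diffusion problem with layers at both endpoints; the transition-point constant sigma is scheme-dependent and chosen arbitrarily here:

```python
import numpy as np

def shishkin_mesh(eps, n, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on (0, 1): n/4 fine cells on
    (0, tau) and on (1 - tau, 1), n/2 coarse cells in between, with
    transition point tau = min(1/4, sigma*sqrt(eps)*ln(n)); n must be
    divisible by 4."""
    tau = min(0.25, sigma * np.sqrt(eps) * np.log(n))
    left = np.linspace(0.0, tau, n // 4 + 1)
    mid = np.linspace(tau, 1.0 - tau, n // 2 + 1)
    right = np.linspace(1.0 - tau, 1.0, n // 4 + 1)
    return np.concatenate([left, mid[1:], right[1:]])

x = shishkin_mesh(eps=1e-8, n=16)
h_fine, h_coarse = x[1] - x[0], x[8] - x[7]   # tiny layer cells vs O(1) interior cells
print(f"tau = {x[4]:.2e}, fine h = {h_fine:.2e}, coarse h = {h_coarse:.2e}")
```

Taking the tensor product of two such meshes produces the highly anisotropic cells near the boundary that make the interpolation analysis of Chapter 3 necessary.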
In Chapter 3 the beautiful theory of Apel and Dobrowolski is extended in order to obtain anisotropic interpolation error estimates for macro-element interpolation. This also sheds light on fundamental construction principles for such operators. The thesis introduces a non-standard finite element space that consists of biquadratic C^1-finite elements on macro-elements over tensor product grids, which can be viewed as a rectangular version of the C^1-Powell-Sabin element. As an application of the general theory developed, several interpolation operators mapping into this FE space are analyzed. The insight gained can also be used to prove anisotropic error estimates for the interpolation operator induced by the well-known C^1-Bogner-Fox-Schmidt element. A special modification of Scott-Zhang type and a certain anisotropic interpolation operator are also discussed in detail. The results of this chapter are used to approximate the solution to a reaction-diffusion problem on a Shishkin mesh that features highly anisotropic elements. The obtained approximation features continuous normal derivatives across certain edges of the mesh, enabling the analysis of the aforementioned CIP method.

Contents:
Notation
1 Introduction
2 Galerkin FEM error estimation in weak norms
2.1 Reaction-diffusion problems
2.2 A convection-diffusion problem with weak characteristic layers and a Neumann outflow condition
2.3 A mesh that resolves only part of the exponential layer and neglects the weaker characteristic layers
2.3.1 Weakly imposed characteristic boundary conditions
2.4 Numerical experiments
2.4.1 A reaction-diffusion problem with boundary layers
2.4.2 A reaction-diffusion problem with an interior layer
2.4.3 A convection-diffusion problem with characteristic layers and a Neumann outflow condition
2.4.4 A mesh that resolves only part of the exponential layer and neglects the weaker characteristic layers
3 Macro-interpolation on tensor product meshes
3.1 Introduction
3.2 Univariate C1-P2 macro-element interpolation
3.3 C1-Q2 macro-element interpolation on tensor product meshes
3.4 A theory on anisotropic macro-element interpolation
3.5 C1 macro-interpolation on anisotropic tensor product meshes
3.5.1 A reduced macro-element interpolation operator
3.5.2 The full C1-Q2 interpolation operator
3.5.3 A C1-Q2 macro-element quasi-interpolation operator of Scott-Zhang type on tensor product meshes
3.5.4 Summary: anisotropic C1 (quasi-)interpolation error estimates
3.6 An anisotropic macro-element of tensor product type
3.7 Application of macro-element interpolation on a tensor product Shishkin mesh
4 Balanced norm results for reaction-diffusion
4.1 The balanced finite element method of Lin and Stynes
4.2 A C0 interior penalty method
4.3 Galerkin finite element method
4.3.1 L2-norm error bounds and supercloseness
4.3.2 Maximum-norm error bounds
4.4 Numerical verification
4.5 Further developments and summary
References
|
270 |
Inégalités de martingales non commutatives et applications / Noncommutative martingale inequalities and applications. Perrin, Mathilde. 5 July 2011.
This thesis presents several results in noncommutative probability theory; it deals in particular with martingale inequalities in von Neumann algebras and their associated Hardy spaces. The first part proves a noncommutative analogue of the Davis decomposition, involving the square function. The classical stopping-time arguments are no longer valid in this setting, and the proof is based on a dual approach. The second main result of this part accordingly determines the dual of the conditioned Hardy space h_1(M). These results are then extended to the case 1<p<2. The second part proves that an atomic decomposition of the Hardy spaces h_1(M) and H_1(M) is valid for noncommutative martingales. Interpolation results between the spaces h_p(M) and bmo(M) are also established, with respect to both the complex and the real interpolation methods. The first two parts concern discrete filtrations. In the third part, we introduce Hardy spaces of noncommutative martingales with respect to a continuous filtration. Analogues of the Burkholder-Gundy and Burkholder-Rosenthal inequalities are obtained in this setting. The Fefferman-Stein duality and the Davis decomposition are also successfully transferred to this situation. The proofs are based on ultraproduct techniques and L_p-modules. A discussion of a decomposition involving algebraic atoms yields the expected interpolation results.
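For orientation, the classical commutative objects whose noncommutative analogues the abstract refers to are standard; the square functions and the two endpoint inequalities read as follows (these formulas are background facts, not taken from the thesis):

```latex
% Martingale differences df_k = f_k - f_{k-1}; plain and conditioned square functions:
\[
S(f) = \Big(\sum_{k\ge 1} |df_k|^2\Big)^{1/2},
\qquad
s(f) = \Big(\sum_{k\ge 1} \mathbb{E}_{k-1}|df_k|^2\Big)^{1/2}.
\]
% Burkholder-Gundy (1 < p < infinity) and Davis (p = 1):
\[
c_p\,\|S(f)\|_p \;\le\; \sup_n \|f_n\|_p \;\le\; C_p\,\|S(f)\|_p,
\qquad
\mathbb{E}\Big[\sup_n |f_n|\Big] \;\simeq\; \mathbb{E}\big[S(f)\big].
\]
% The Hardy norms ||f||_{H_1} = ||S(f)||_1 and ||f||_{h_1} = ||s(f)||_1 are the
% commutative ancestors of the spaces H_1(M) and h_1(M) in the abstract.
```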
|