221 |
Explicit algebraic subgrid-scale stress and passive scalar flux modeling in large eddy simulation. Rasam, Amin. January 2011 (has links)
The present thesis deals with a number of challenges in the field of large eddy simulation (LES): the performance of subgrid-scale (SGS) models at fairly high Reynolds numbers and coarse resolutions, and passive scalar and stochastic modeling in LES. Fully developed turbulent channel flow is used as the test case for these investigations. The advantage of this particular test case is that highly accurate pseudo-spectral methods can be used for the discretization of the governing equations; in the absence of discretization errors, a better understanding of subgrid-scale model performance can be achieved. Moreover, turbulent channel flow is a challenging test case for LES, since it shares important features common to all wall-bounded turbulent flows. Most commonly used eddy-viscosity-type models are suitable for moderately to highly resolved LES, where the unresolved scales are approximately isotropic; this, however, makes simulations of high Reynolds number wall-bounded flows computationally expensive. In contrast, the explicit algebraic (EA) model takes the anisotropy of SGS motions into account and predicts the flow statistics well in coarse-grid LES. LES of high Reynolds number wall-bounded flows can therefore be performed with a much smaller number of grid points than with other models. A demonstration of the resolution requirements of the EA model, compared with the dynamic Smagorinsky model and its high-pass-filtered version, is given in this thesis for a fairly high Reynolds number. One shortcoming of the commonly used eddy diffusivity model arises from its assumption that the SGS scalar flux vector is aligned with the resolved scalar gradient; SGS scalar flux models that overcome this issue are, however, very few.
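As an illustration of the eddy-viscosity closures discussed above, a minimal sketch of the classical Smagorinsky model, in which the SGS stress is modeled through an eddy viscosity nu_t = (C_s * Delta)^2 * |S|. The constant C_s = 0.17 and the numerical values below are illustrative assumptions, not figures from the thesis:

```python
import math

def smagorinsky_nu_t(strain_rate_tensor, delta, c_s=0.17):
    """Eddy viscosity nu_t = (C_s * Delta)^2 * |S|, with
    |S| = sqrt(2 S_ij S_ij) for a 3x3 resolved strain-rate tensor."""
    s_mag = math.sqrt(2.0 * sum(s_ij * s_ij
                                for row in strain_rate_tensor
                                for s_ij in row))
    return (c_s * delta) ** 2 * s_mag

# Pure-shear example: S_12 = S_21 = 0.5 * dU/dy with dU/dy = 100 1/s (assumed)
S = [[0.0, 50.0, 0.0],
     [50.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(S, delta=1e-3)  # 1 mm filter width, assumed
```

For this pure-shear tensor |S| evaluates to 100 1/s, so nu_t = (0.17e-3)^2 * 100.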
Using the same methodology that led to the EA SGS stress model, a new explicit algebraic SGS scalar flux model is developed, which allows the SGS scalar fluxes to be partially independent of the resolved scalar gradient. The model predictions are verified and found to improve the scalar statistics in comparison with the eddy diffusivity model. The intermittent nature of the energy transfer between the large and small scales of turbulence is often not fully taken into account in the formulation of SGS models, for both velocity and scalar. Using the Langevin stochastic differential equation, the EA models are extended to incorporate random variations in their predictions, which leads to a reasonable amount of backscatter of energy from the SGS to the resolved scales. The stochastic EA models improve the prediction of the SGS dissipation by decreasing its length scale and improving the shape of its probability density function. / QC 20110615
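The Langevin extension mentioned above rests on a stochastic differential equation. A generic Euler-Maruyama sketch of an Ornstein-Uhlenbeck process (the drift and noise coefficients here are placeholders, not the thesis's actual model) illustrates how random increments produce intermittent sign reversals, the analog of backscatter events:

```python
import random

def langevin_path(x0, relax_time, noise, dt, n_steps, seed=0):
    """Euler-Maruyama integration of dX = -(X / T) dt + sigma dW,
    an Ornstein-Uhlenbeck process, a common Langevin-type model."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Wiener increment ~ N(0, dt)
        x += -(x / relax_time) * dt + noise * dw
        path.append(x)
    return path

path = langevin_path(x0=1.0, relax_time=0.5, noise=0.8, dt=0.01, n_steps=2000)
backscatter_events = sum(1 for v in path if v < 0.0)  # sign reversals
```

With zero mean and nonzero noise, the process repeatedly crosses zero, which is the qualitative behavior a deterministic relaxation model cannot produce.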
|
222 |
Simulace řízení asynchronního motoru s ohledem na vysokou účinnost / Simulation of induction machine control methods with respect to maximum efficiency. Hanzlíček, Martin. January 2021 (has links)
The diploma thesis deals with the simulation of induction motor control with respect to high efficiency. The theory of the induction motor is described, with emphasis on its losses, and both scalar and vector control are presented; the vector control is then optimized for higher efficiency. Subsequently, the creation of a model in MATLAB/Simulink for comparing vector control with and without the optimization is described.
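The scalar control mentioned above is commonly implemented as constant volts-per-hertz (V/f) control, holding the stator voltage proportional to the commanded frequency to keep the flux roughly constant. A minimal sketch, with the rated values and low-speed boost chosen purely for illustration (they are not the thesis's machine parameters):

```python
def vf_voltage(f_cmd, v_rated=400.0, f_rated=50.0, v_boost=10.0):
    """Constant V/f law with a low-speed boost:
    V = V_boost + (V_rated / f_rated) * f, clipped at the rated
    voltage above base speed (field-weakening region)."""
    v = v_boost + (v_rated / f_rated) * f_cmd
    return min(v, v_rated)

v_half = vf_voltage(25.0)   # half speed: roughly half voltage plus boost
v_over = vf_voltage(60.0)   # above base speed: voltage clipped at rated
```

The clipping is the design choice that makes flux fall off above base speed, which is one of the efficiency trade-offs that vector-control optimization addresses.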
|
223 |
Cohérence discursive et implicatures conversationnelles : analyses empiriques et théoriques vers un modèle pragmatique à l'échelle de la conversation. Meister, Fiona. 08 1900 (has links)
Selon Asher (2013), la cohérence discursive force l’inférence de (1c)
dans les exemples (1a)-(1b), expliquant ainsi l’(in)acceptabilité des exemples.
(1) a. ‘John a un nombre pair d’enfants. Il en a 4.’
b. ‘ ? ?John a un nombre pair d’enfants. Il en a 3.’
c. +> John a n enfants et pas plus
Nous avons tenté de déterminer si les implicatures nécessaires au maintien
de la cohérence discursive sont systématiquement inférées en nous
appuyant sur les théories de la RST et de la SDRT.
Des tests linguistiques et la vérification du respect des contraintes sémantiques
associées aux relations de discours ont mis en évidence deux
catégories d’exemples contenant le quantificateur certains : ceux de typeRenfNA, dont les implicatures ne sont pas nécessaires à la cohérence et
ceux de typeRenfA dans lesquels elles le sont. Nos tests ayant révélé que le
renforcement est nécessaire dans les exemples de typeRenfA, nous avons
conclu que les implicatures ne sont pas systématiquement inférées.
Nous avons tenté d’apporter une explication à ce phénomène en effectuant
des analyses de la structure discursive de nos exemples et avons
démontré que dans les exemples de typeRenfNA, les relations de discours
visent le constituant π∃ (certains), tandis que dans ceux de typeRenfA,
le constituant π¬∀ (mais pas tous) est visé.
Nos travaux ont démontré que les implicatures scalaires ne sont pas
systématiquement inférées rendant parfois leur renforcement obligatoire.
Nous avons également proposé un modèle à granularité fine prenant en
compte la structure discursive et la pragmatique afin d’expliquer ce phénomène. / According to Asher (2013), discourse coherence forces the inference of
(2c) in examples (2a)-(2b), thus explaining the (in)acceptability of these
examples.
(2) a. ‘John has an even number of children. He has 4.’
b. ‘??John has an even number of children. He has 3.’
c. +> John has n children and not more
We attempted to determine whether the implicatures that are necessary
to maintain discourse coherence are systematically inferred by drawing
on the theories of RST and SDRT.
Through linguistic tests and checking the respect of semantic constraints
associated with discourse relations, we identified two categories
of examples containing the quantifier some: typeRenfNA examples, in
which implicatures are not necessary for discourse coherence, and typeRenfA
examples in which they are. As our tests revealed that reinforcement
is necessary in typeRenfA examples, we concluded that implicatures are
not systematically inferred.
We then attempted to explain this phenomenon. We performed analyses
of the discourse structure of our examples and showed that in typeRenfNA
examples, the discourse relations target the π∃ (some) constituent,
while in typeRenfA examples, the π¬∀ (but not all) constituent
is targeted.
Thus, our work has shown that scalar implicatures are not systematically
inferred, making implicature reinforcement sometimes mandatory.
We also proposed a fine-grained model taking discourse structure and
pragmatics into account to explain this phenomenon.
|
224 |
Experimental study of passive scalar mixing in swirling jet flows. Örlü, Ramis. January 2006 (has links)
Despite its importance in various industrial applications, there is still a lack of experimental studies on the dynamic and thermal fields of swirling jets in the near-field region. The present study is an attempt to fill this gap and provide new insights into the effect of rotation on the turbulent mixing of a passive scalar, on turbulence (joint) statistics, and on the turbulence structure. Swirl is known to increase the spreading of free turbulent jets and hence to entrain more ambient fluid. Contrary to previous experiments, which leave traces of the swirl-generating method especially in the near field, the swirl was imparted by discharging a slightly heated air flow from an axially rotating and thermally insulated pipe (6 m long, 60 mm in diameter). This gives well-defined axisymmetric streamwise and azimuthal velocity distributions as well as a well-defined temperature profile at the jet outlet. The experiments were performed at a Reynolds number of 24000 and a swirl number (the ratio between the angular velocity of the pipe wall and the bulk velocity in the pipe) of 0.5. By means of a specially designed combined X-wire and cold-wire probe, it was possible to simultaneously acquire the instantaneous axial and azimuthal velocity components as well as the temperature, and to compensate the former for temperature variations. The comparison of the swirling and non-swirling cases clearly indicates a modification of the turbulence structure, in such a way that the swirling jet spreads and mixes faster than its non-swirling counterpart. It is also shown that the streamwise velocity and temperature fluctuations are highly correlated and that the addition of swirl drastically increases the streamwise passive scalar flux in the near field. / QC 20101124
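The two governing parameters quoted above follow directly from their definitions, Re = U_b D / nu and S = omega R / U_b. A small sketch with illustrative values (the bulk velocity, air viscosity, and wall angular velocity are assumptions chosen only to reproduce the quoted magnitudes, not figures from the thesis):

```python
def reynolds_number(u_bulk, diameter, nu):
    """Pipe Reynolds number Re = U_b * D / nu."""
    return u_bulk * diameter / nu

def swirl_number(omega, radius, u_bulk):
    """Swirl number as defined in the abstract: wall angular speed
    over bulk speed, S = (omega * R) / U_b."""
    return omega * radius / u_bulk

D = 0.06          # pipe diameter [m], as stated in the abstract
nu_air = 1.5e-5   # kinematic viscosity of air [m^2/s], assumed
U_b = 6.0         # bulk velocity [m/s], assumed
Re = reynolds_number(U_b, D, nu_air)                      # 24000
S = swirl_number(omega=100.0, radius=D / 2, u_bulk=U_b)   # 0.5
```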
|
225 |
Development of an Interpolation-Free Sharp Interface Immersed Boundary Method for General CFD Simulations. Kamau, Kingora. 08 1900 (has links)
Immersed boundary (IB) methods are attractive due to their ability to simulate flow over complex geometries on a simple Cartesian mesh. Unlike conformal grid formulations, the mesh does not need to conform to the shape and orientation of the boundary. This eliminates the need for complex meshing and/or re-meshing in simulations with moving/morphing boundaries, which can be cumbersome and computationally expensive. However, the imposition of boundary conditions in IB methods is not straightforward; numerous modifications and refinements have been proposed, and a number of variants of this approach now exist. In a nutshell, IB methods in the literature often suffer from numerical oscillations, implementation complexity, time-step restrictions, a blurred interface, and a lack of generality, which limits their ability to mimic conformal grid results and to enforce Neumann boundary conditions. In addition, there is no generic IB method capable of solving flows with multiple potentials, closely/loosely packed structures, or IBs of infinitesimal thickness. This dissertation describes a novel second-order direct forcing immersed boundary method designed for the simulation of two- and three-dimensional incompressible flow problems with complex immersed boundaries. In this formulation, each cell cut by the IB is reshaped to conform to the shape of the IB. IBs are modeled as a series of 2D planes in 3D space that connect seamlessly at the edges of the cut cells, in a way that mimics a conformal grid. IBs are represented in a continuous and consistent fashion from one cell to another, thus eliminating spatial pressure oscillations originating from an inconsistent description of the IB as well as the traditional stair-step problem, leading to a more accurate resolution of the boundary layer.
Boundary conditions are enforced at the exact location of the IB without interpolation, which guarantees sound simulations even on grids with high aspect ratio and enables simulations of flows packed with multiple IBs in close proximity. Boundary conditions for each phase across the IB are enforced independently, yielding a unique capability to solve flows with zero-thickness IBs. Simulations of a large number of 2D and 3D test cases confirm the capabilities of the devised immersed boundary method in solving flows over multiple loosely/closely packed IBs; stationary, moving, and highly morphing IBs; as well as IBs with zero thickness. Extension of the proposed scheme to flows with multiple potentials is demonstrated by simulating the transfer and transport of a passive scalar from an array of side-by-side and tandem cylinders in cross-flow. Aquatic vegetation, represented by a colony of circular cylinders with low to high solid fraction, is simulated to showcase the strength of the current numerical technique in solving flow through closely packed structures. The aquatic vegetation studies are extended to a colony of flat plates with different orientations to show the capability of the developed method in modeling zero-thickness structures.
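A first step in any cut-cell formulation like the one above is classifying Cartesian cells against the immersed boundary. A minimal 2D sketch, flagging cells as fluid, solid, or cut from the sign of a signed-distance function at the cell corners (the circular geometry and grid are illustrative assumptions, not the dissertation's solver):

```python
def classify_cells(nx, ny, h, sdf):
    """Classify each cell of an nx-by-ny Cartesian grid of spacing h as
    'fluid', 'solid', or 'cut', from the sign of a signed-distance
    function at the cell's four corners (positive = fluid side)."""
    cells = {}
    for i in range(nx):
        for j in range(ny):
            corners = [sdf(i * h, j * h), sdf((i + 1) * h, j * h),
                       sdf(i * h, (j + 1) * h), sdf((i + 1) * h, (j + 1) * h)]
            if all(d > 0 for d in corners):
                cells[(i, j)] = "fluid"
            elif all(d < 0 for d in corners):
                cells[(i, j)] = "solid"
            else:
                cells[(i, j)] = "cut"  # would be reshaped to conform to the IB
    return cells

# Circle of radius 0.3 centered in a unit square, on a 20x20 grid
circle = lambda x, y: ((x - 0.5) ** 2 + (y - 0.5) ** 2) ** 0.5 - 0.3
cells = classify_cells(20, 20, 0.05, circle)
n_cut = sum(1 for v in cells.values() if v == "cut")
```

Only the cells flagged "cut" need reshaping; the rest are treated as ordinary Cartesian cells, which is what keeps such methods cheap relative to body-fitted meshing.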
|
226 |
Vehicle Conceptualisation, Compactness, and Subsystem Interaction : A network approach to design and analyse the complex interdependencies in vehicles. Abburu, Sai Kausik. January 2023 (has links)
The conventional approach to vehicle design is restrictive, limited, and biased. This often leads to sub-optimal utilisation of vehicle capabilities and allocated resources, and ultimately entails the repercussions of designing, and later using, an inefficient vehicle. To overcome these limitations, it is important to gain a deeper understanding of the interaction effects at the component, subsystem, and system levels. In this thesis, the research is focused on identifying appropriate methods and developing robust models to facilitate this interaction analysis. Criteria were developed to scrutinise and identify appropriate methods. Initially, the Design Structure Matrix (DSM) and its variations were examined. While the DSM proved fundamental for capturing interaction effects, it lacked the ability to answer questions about the structure and behaviour of interactions and to predict unintended effects. Therefore, network theory was explored as a complementary method to the DSM, capable of providing insights into interaction structures and identifying influential variables. Subsequently, two criteria were established to identify subsystems significant to interaction analysis: high connectivity to other subsystems and multidisciplinary composition. The traction motor was observed to satisfy both criteria, as it had high connectivity with other subsystems and was composed of multiple disciplines. Therefore, a detailed model of an induction motor was developed to enable the interaction analysis. The induction motor model was integrated into a cross-scalar design tool. The tool employed a two-step process: translating operational parameters to motor inputs using Newtonian equations, and deriving the physical attributes, performance characteristics, and performance attributes of the motor. Comparing the obtained performance characteristics curve against existing studies validated the model's reliability and capabilities.
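The connectivity criterion described above can be sketched with a toy DSM: a symmetric binary matrix over subsystems whose row sums (node degrees, in network terms) expose the most connected candidate. The subsystem list and couplings below are invented for illustration and are not taken from the thesis:

```python
# Hypothetical subsystems and a symmetric binary DSM (1 = interaction)
subsystems = ["traction motor", "inverter", "battery", "chassis", "cooling"]
dsm = [
    [0, 1, 1, 1, 1],   # traction motor couples to every other subsystem
    [1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
]

# Node degree = row sum; the highest-degree node is the connectivity candidate
degree = {name: sum(row) for name, row in zip(subsystems, dsm)}
most_connected = max(degree, key=degree.get)
```

In this toy matrix the traction motor has the highest degree, mirroring how the connectivity criterion singles it out in the thesis.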
The design tool demonstrated adaptability to different drive cycles and the ability to modify motor performance without affecting operational parameters, validating its capability to capture cross-scalar and intra-subsystem interaction effects. To examine inter-subsystem interaction, a thermal model of an inverter was developed, capturing temperature variations in the power electronics based on the motor inputs. The design tool successfully captured interaction effects between the motor and inverter designs, highlighting the interplay with operational parameters. Thus, this thesis identifies methods for interaction analysis and develops robust subsystem models. The integrated design tool effectively captures intra-subsystem, inter-subsystem, and cross-scalar interaction effects. The research presented contributes to the overarching project goal of developing methods and models that capture interaction effects and, in turn, serve as a guiding tool for designers to understand the consequences of their design choices. / Det konventionella tillvägagångssättet för fordonsdesign är restriktiv, begränsat och partiskt. Detta leder ofta till en suboptimal användning av fordonets kapacitet och tilldelade resurser och innebär i slutändan att konsekvenserna blir att använda ett ineffektivt fordon. För att övervinna dessa begränsningar är det viktigt att få en djupare förståelse för interaktionseffekterna på komponent-, delsystem- och systemsnivå. I denna avhandling fokuserar forskningen på att identifiera lämpliga metoder och utveckla robusta modeller för att underlätta interaktionsanalysen. För att granska och identifiera lämpliga metoder utvecklades kriterier. Till att börja med undersöktes Design Structure Matrix (DSM) och dess variationer. Medan DSM visade sig vara grundläggande för att fånga interaktionseffekter, saknade den förmågan att besvara frågor om interaktionsstrukturer och beteende samt förutsäga oavsiktliga effekter.
Därför utforskades nätverksteori som en kompletterande metod till DSM, vilket kunde ge insikter i interaktionsstrukturer och identifiera inflytelserika variabler. Därefter etablerades två kriterier för att identifiera delsystem som är betydelsefulla för interaktionsanalysen: hög anslutning till andra delsystem och mångdisciplinär sammansättning. Dragkraftmotorn observerades uppfylla båda kriterierna eftersom den hade högre anslutning till andra delsystem och var sammansatt av flera discipliner. Därför utvecklades en detaljerad modell av en induktionsmotor för att möjliggöra interaktionsanalysen. Induktionsmotormodellen integrerades i ett tvärskaligt designverktyg. Verktyget använde en tvåstegsprocess: att översätta operativa parametrar till motorinsatser med hjälp av Newtons ekvationer och härleda fysiska egenskaper, prestandakaraktäristik och prestandaattribut hos motorn. Jämförelse av den erhållna prestandakaraktäristikkurvan med befintliga studier validerade modellens tillförlitlighet och förmågor. Designverktyget visade anpassningsbarhet till olika körcykler och förmågan att modifiera motorprestanda utan att påverka operativa parametrar. Detta validerade designverktygets förmåga att fånga tvärskaliga och intra-subsystem interaktionseffekter. För att undersöka inter-subsysteminteraktion utvecklades en termisk modell av en inverter, som fångade temperaturvariationer i kraftelektroniken baserat på motorns styrning. Designverktyget fångade framgångsrikt interaktionseffekter mellan motor- och inverterdesign och belyste samspelet med operativa parametrar. Därmed identifierar denna avhandling metoder för interaktionsanalys och utvecklar robusta delsystemmodeller. Det integrerade designverktyget fångar effektivt intra-subsystem-, inter-subsystem- och tvärskaliga interaktionseffekter. 
Den presenterade forskningen bidrar till det övergripande projektets mål att utveckla metoder och modeller som fångar interaktionseffekter och i sin tur fungerar som ett vägledande verktyg för designers att förstå konsekvenserna av sina designval. / QC 231003
|
227 |
Scalar Sector Extension and Physics Beyond Standard Model / スカラーセクターの拡張と素粒子標準模型を超えた物理. Abe, Yoshihiko. 23 March 2022 (has links)
Kyoto University / New-system doctoral program / Doctor of Science / 甲第23698号 / 理博第4788号 / 新制||理||1685 (University Library) / Division of Physics and Astronomy, Graduate School of Science, Kyoto University / (Chief examiner) Associate Professor Masafumi Fukuma; Associate Professor Koichi Yoshioka; Professor Kouichi Hagino / Meets Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
|
228 |
Critical Points of Uncertain Scalar Fields: With Applications to the North Atlantic Oscillation. Vietinghoff, Dominik. 29 May 2024 (has links)
In an era of rapidly growing data sets, information reduction techniques, such as extracting and highlighting characteristic features, are becoming increasingly important for efficient data analysis. Particularly relevant features of scalar fields are their critical points, since they mark locations in the domain where a field's level set undergoes fundamental topological changes. There are well-established methods for locating and relating such points in a deterministic setting. However, many real-world phenomena studied in the computational sciences today are the result of a chaotic system that cannot be fully described by a single scalar field. Instead, the variability of such systems is typically captured with ensemble simulations, which generate a variety of possible outcomes of the simulated process. The topological analysis of such ensemble data sets, and of uncertain data in general, is less well studied. In particular, there is no established definition for critical points of uncertain scalar fields. This thesis therefore aims to generalize the concept of critical points to uncertain scalar fields. While a deterministic field has a single set of critical points, each outcome of an uncertain scalar field has its own set of critical points. A first step towards finding an appropriate analog for critical points in uncertain data is to look at the distribution of all these critical points. In this work, different methods for analyzing this distribution are presented, which identify and track the likely locations of critical points over time, estimate their local occurrence probabilities, and eventually characterize their spatial uncertainty.
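For a scalar field sampled on a grid, critical points of the deterministic kind discussed above can be located by comparing each interior sample with its neighbors. A minimal 2D sketch that reports strict 4-neighborhood extrema only (saddles, boundaries, and degenerate plateaus are ignored for brevity; this is not the thesis's extraction method):

```python
def grid_extrema(field):
    """Return (i, j, 'min'|'max') for interior grid points that are
    strict 4-neighborhood extrema of a 2D field given as a list of rows."""
    found = []
    for i in range(1, len(field) - 1):
        for j in range(1, len(field[0]) - 1):
            c = field[i][j]
            nbrs = [field[i - 1][j], field[i + 1][j],
                    field[i][j - 1], field[i][j + 1]]
            if all(c > v for v in nbrs):
                found.append((i, j, "max"))
            elif all(c < v for v in nbrs):
                found.append((i, j, "min"))
    return found

f = [[0, 0, 0, 0],
     [0, 5, 1, 0],
     [0, 1, -3, 0],
     [0, 0, 0, 0]]
crit = grid_extrema(f)  # one maximum at (1, 1), one minimum at (2, 2)
```

In the uncertain setting studied in the thesis, each ensemble member would yield its own such set, and the analysis then turns to the distribution of these sets.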
A driving factor of winter weather in western Europe is the North Atlantic Oscillation (NAO), which is manifested by fluctuations in the sea level pressure difference between the Icelandic Low and the Azores High. Several methods have been developed to describe the strength of this oscillation. Some of them are based on certain assumptions, such as fixed positions of these two pressure systems. It is possible, however, that climate change will affect the locations of the main pressure variations and thus the validity of these descriptive methods. An alternative approach is based on the leading empirical orthogonal function (EOF) computed from the sea level pressure fields over the North Atlantic. The critical points of these fields indicate the actual locations of maximum pressure variations and can thus be used to assess how climate change affects these locations and to evaluate the validity of methods that use fixed locations to characterize the strength of the NAO. Because the climate is described by a chaotic system, such an analysis should incorporate the uncertain nature of climate predictions to produce statistically robust results. Extracting and tracking the positions of the maximum pressure variations that characterize the NAO therefore serves as a motivating practical application for the study of critical points in uncertain data in this work.
Because uncertain data tend to be noisy, filtering is often required to separate relevant signals of variation from irrelevant fluctuations. A well-established method for extracting dominant signals from a time series of fields is to compute its empirical orthogonal functions (EOFs). In the first part of this thesis, this concept is extended to the analysis of spatiotemporal ensemble data sets to decompose their variation into modes describing the variation in the ensemble direction and modes describing the variation in the time direction. An application to different climate data sets revealed that, depending on the way an ensemble has been generated, temporal and ensemble-wise variations are not necessarily independent, making it difficult to separate these signals.
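The EOF computation underlying this part can be sketched as an eigenanalysis of the spatial covariance of time-mean-removed anomalies. A minimal pure-Python version that extracts the leading mode by power iteration (the toy data and the fixed start vector, assumed not orthogonal to the leading mode, are illustrative assumptions):

```python
def leading_eof(fields):
    """Leading empirical orthogonal function of a time series of flattened
    fields: the dominant eigenvector of the spatial covariance matrix of
    the time-mean-removed anomalies, found by power iteration."""
    n_t, n_x = len(fields), len(fields[0])
    mean = [sum(f[k] for f in fields) / n_t for k in range(n_x)]
    anom = [[f[k] - mean[k] for k in range(n_x)] for f in fields]
    # Covariance C[k][l] = sum_t anom[t][k] * anom[t][l] / (n_t - 1)
    cov = [[sum(a[k] * a[l] for a in anom) / (n_t - 1) for l in range(n_x)]
           for k in range(n_x)]
    v = [1.0] + [0.0] * (n_x - 1)  # start vector, assumed non-orthogonal
    for _ in range(200):           # power iteration
        w = [sum(cov[k][l] * v[l] for l in range(n_x)) for k in range(n_x)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy series whose variability lives entirely in the pattern [1, -1, 0]
fields = [[a, -a, 0.3] for a in (1.0, -2.0, 0.5, 0.5)]
eof1 = leading_eof(fields)  # aligns (up to sign) with [1, -1, 0] / sqrt(2)
```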
Next, a computational pipeline for tracking likely locations of critical points in ensembles of scalar fields is presented. It computes leading EOFs on sliding time windows for all ensemble members, extracts regions where critical points can be expected from the resulting ensembles of EOFs for every time window, and finally tracks the barycenters of these regions over time. An application of this pipeline to sea level pressure fields over the North Atlantic revealed systematic shifts in the locations of the maximum pressure variations that characterize the NAO. The shifts found were more pronounced for the more extreme climate change scenarios.
Existing methods for the identification of critical points in ensembles of scalar fields do not distinguish between uncertainties that are inherent in the analyzed system itself and those that are additionally introduced by using a finite sample of fields to capture these variations. In the next part of this thesis, two approaches for estimating the occurrence probabilities of critical points are presented that explicitly take into account and communicate to the viewer the additional uncertainties caused by estimating these probabilities from finite-sized ensembles. A comparison with existing works on synthetic data demonstrates the added value of the new approaches.
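Estimating an occurrence probability from a finite ensemble amounts to estimating a binomial proportion with sampling uncertainty. One standard way to expose that additional uncertainty is a Wilson score interval; this is a generic statistical device used here for illustration, not necessarily the estimator developed in the thesis:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion: k occurrences of a
    critical point in n ensemble members; z = 1.96 gives ~95% coverage."""
    p = k / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo_small, hi_small = wilson_interval(k=6, n=20)    # small ensemble: wide interval
lo_big, hi_big = wilson_interval(k=300, n=1000)    # same p = 0.3, narrower
```

The widening of the interval for the small ensemble is exactly the finite-sample uncertainty that the thesis argues should be communicated alongside the probability itself.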
The last part of this thesis is devoted to the question of how to characterize the spatial uncertainty of critical points. It provides a sound mathematical formulation of the problem of finding critical points with spatial uncertainty and computing their spatial distribution. This ultimately leads to the notion of uncertain critical points as a generalization of critical points to uncertain scalar fields. An analysis of the theoretical properties of these structures gave conditions under which well-interpretable results can be obtained and revealed interpretational difficulties when these conditions are not met. / In Zeiten immer größerer Datensätze gewinnen Techniken zur Informationsreduktion, etwa die Extraktion und Hervorhebung charakteristischer Merkmale, zunehmend an Bedeutung für eine effiziente Datenanalyse. Besonders relevante Merkmale von Skalarfeldern sind ihre kritischen Punkte, da sie Orte in der Domäne kennzeichnen, an denen sich die Topologie der Niveaumenge eines Feldes grundlegend verändert. Es existieren etablierte Methoden, um diese Punkte in deterministischen Feldern zu lokalisieren und sie miteinander in Beziehung zu setzen. Viele Alltagsphänomene, die heute untersucht werden, sind jedoch das Ergebnis chaotischer Systeme, die sich nicht vollständig durch ein einzelnes Skalarfeld beschreiben lassen. Stattdessen wird die Variabilität solcher Systeme mit Ensemblesimulationen erfasst, die eine Vielzahl möglicher Ergebnisse des simulierten Prozesses erzeugen. Die topologische Analyse solcher Ensemble-Datensätze und unsicherer Daten im Allgemeinen ist bisher weniger gut erforscht. Insbesondere gibt es noch keine etablierte Definition für die kritischen Punkte von unsicheren Skalarfeldern. In dieser Dissertation wird daher eine Verallgemeinerung des Konzepts kritischer Punkte auf unsichere Skalarfelder angestrebt. 
Während ein deterministisches Feld einen einzigen Satz kritischer Punkte hat, hat jede Realisierung eines unsicheren Skalarfeldes ihre eigenen kritischen Punkte. Ein erster Schritt, um ein geeignetes Analogon für kritische Punkte in unsicheren Daten zu finden, besteht darin, die Verteilung all dieser kritischen Punkte zu untersuchen. Zu diesem Zweck werden in dieser Arbeit verschiedene Methoden vorgestellt, die es ermöglichen, die wahrscheinlichen Orte kritischer Punkte zu identifizieren und über die Zeit zu verfolgen, die lokalen Wahrscheinlichkeiten für das Auftreten kritischer Punkte zu schätzen und schließlich die räumliche Unsicherheit von kritischen Punkten zu charakterisieren.
Ein bestimmender Faktor für das Winterwetter in Westeuropa ist die Nordatlantische Oszillation (NAO), die sich in Schwankungen des Druckunterschieds auf Meereshöhe zwischen dem Islandtief und dem Azorenhoch äußert. Es existieren unterschiedliche Methoden, um die Stärke dieser Oszillation zu beschreiben, von denen einige auf bestimmten Annahmen beruhen, wie etwa der fixen Position der beiden Drucksysteme. Es ist jedoch möglich, dass der Klimawandel die Lage der Hauptdruckschwankungen und somit die Gültigkeit dieser Beschreibungsmethoden beeinträchtigt. Ein alternativer Ansatz basiert auf der führenden empirischen Orthogonalfunktion (EOF), welche aus den Druckfeldern auf Meereshöhe über dem Nordatlantik berechnet wird. Die kritischen Punkte dieses Feldes entsprechen den tatsächlichen Orten maximaler Druckschwankungen. Sie können daher verwendet werden, um die Auswirkungen des Klimawandels auf diese Orte zu bewerten und dadurch die Gültigkeit von Methoden, die feste Positionen zur Charakterisierung der Stärke der NAO verwenden, zu beurteilen. Da das Klima durch ein chaotisches System beschrieben wird, sollte eine solche Analyse die Unsicherheit von Klimavorhersagen berücksichtigen, um statistisch zuverlässige Ergebnisse zu erhalten. Die Extraktion und Verfolgung der für die NAO charakteristischen Positionen maximaler Druckschwankungen dient daher als motivierende praktische Anwendung für die Untersuchung kritischer Punkte in unsicheren Daten in dieser Arbeit.
Da unsichere Daten oft verrauscht sind, ist meist zunächst eine Filterung erforderlich, um relevante Signale von irrelevanten Fluktuationen zu trennen. Ein etabliertes Konzept zur Extraktion dominanter Signale aus Zeitreihen von Skalarfeldern ist die empirische Orthogonalfunktionsanalyse (EOF-Analyse). Im ersten Teil dieser Arbeit wird dieses Konzept auf die Analyse von zeitabhängigen Ensemble-Datensätzen erweitert, um deren Variation in Moden zu zerlegen, die die jeweiligen Schwankungen in Ensemble- und Zeitrichtung beschreiben. Eine Anwendung auf verschiedene Klimadatensätze hat gezeigt, dass je nachdem, wie ein Ensemble generiert wurde, zeitliche und ensemblebezogene Variationen nicht zwangsläufig unabhängig sind, was eine Trennung dieser Signale erschwert.
Im weiteren wird eine Berechnungspipeline zur Verfolgung der wahrscheinlichen Positionen kritischer Punkte in Ensemblen von Skalarfeldern vorgestellt. Sie berechnet zunächst die führenden EOFs auf gleitenden Zeitfenstern für jedes Ensemblemitglied, extrahiert dann aus den resultierenden Ensemblen von EOFs an jedem Zeitfenster Regionen, in denen kritische Punkte zu erwarten sind, und verfolgt schließlich die Baryzentren dieser Regionen über die Zeit. Die Anwendung dieser Pipeline auf die nordatlantischen Meeresspiegeldruckfelder hat eine systematische Verschiebungen der für die NAO charakteristischen Orte der maximalen Druckvariationen offenbart. Dabei führten extremere Klimawandelszenarien zu stärkeren Verschiebungen.
Vorhandene Methoden zur Identifikation von kritischen Punkten in Ensemblen von Skalarfeldern unterscheiden nicht zwischen Unsicherheiten, die dem analysierten System selbst innewohnen, und solchen, die durch die Verwendung einer endlichen Stichprobe von Feldern zur Erfassung dieser Variationen zusätzlich verursacht werden. Im nächsten Teil dieser Arbeit werden daher zwei Ansätze zur Schätzung der Auftrittswahrscheinlichkeiten kritischer Punkte vorgestellt, die explizit auch die zusätzlichen Unsicherheiten berücksichtigen, die durch die Schätzung dieser Wahrscheinlichkeiten aus endlichen Ensemblen entstehen, und diese an den Betrachter kommunizieren. Der Mehrwert der neuen Verfahren wurde in einem Vergleich mit bestehenden Arbeiten auf synthetischen Daten demonstriert.
Der letzte Teil dieser Arbeit ist der Frage gewidmet, wie sich die räumliche Unsicherheit kritischer Punkte charakterisieren lässt. Es wird eine fundierte mathematische Formulierung des Problems der Suche nach kritischen Punkten mit räumlicher Unsicherheit und der Berechnung ihrer räumlichen Verteilung erbracht. Das führt schließlich zum Begriff unsicherer kritischer Punkte als Verallgemeinerung von kritischen Punkten auf unsichere Skalarfelder. Eine Analyse der theoretischen Eigenschaften dieser Strukturen hat Bedingungen ergeben, unter denen einfach zu interpretierende Ergebnisse erzielt werden können, und offenbarte Interpretationsschwierigkeiten, die entstehen, wenn diese Bedingungen nicht erfüllt sind.
|
229 |
Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction. Oesterling, Patrick. 17 May 2016 (PDF)
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate to groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on implementing two key concepts: The first idea is to discard those geometric properties that cannot be preserved and, thus, lead to the typical artifacts. Topological concepts are used instead to shift away the focus from a point-centered view on the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement.
The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions; that is, the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear as new, merge or split, or vanish. Especially for high-dimensional data, both tracking, that is, relating features over time, and visualizing the changing structure are difficult problems to solve.
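The core idea of extracting clustering structure in the data's original domain, rather than in a lossy low-dimensional projection, can be sketched with the simplest topology-driven clustering: single-linkage merging via union-find. This is only a minimal sketch, not the abstraction used in the thesis; the blob positions, dimensionality, and distance threshold are assumptions chosen so the two clusters are well separated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated Gaussian blobs in 5 dimensions: clusters that a 2-D
# scatterplot projection could easily occlude, but that a connectivity-based
# (topological) summary separates cleanly in the original domain.
a = rng.normal(loc=0.0, scale=0.4, size=(60, 5))
b = rng.normal(loc=5.0, scale=0.4, size=(60, 5))
points = np.vstack([a, b])

# Union-find over points; merging all pairs closer than a threshold yields
# the connected components of the distance graph -- no projection needed.
parent = list(range(len(points)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(i, j):
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj

threshold = 2.0  # merge components connected by an edge shorter than this
n = len(points)
for i in range(n):
    for j in range(i + 1, n):
        if np.linalg.norm(points[i] - points[j]) < threshold:
            union(i, j)

labels = [find(i) for i in range(n)]
print(len(set(labels)))  # number of connected components (clusters)
```

Sweeping the threshold from 0 upward and recording when components merge produces exactly the kind of hierarchical, occlusion-free overview the abstract describes; the local geometric refinement then inspects each component's points separately.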
|
230 |
Etude ab initio des effets de corrélation et des effets relativistes dans les halogénures diatomiques de métaux de transition / Ab initio study of the correlation and relativistic effects in diatomic halides containing a transition metal. Rinskopf, Nathalie D. D. 07 September 2007 (has links)
This work is an ab initio contribution to the characterization of diatomic transition-metal halides. We chose to characterize the electronic structure of the group Vb transition-metal chlorides (NbCl and TaCl) and of nickel fluoride, because a series of spectra of these molecules had been recorded but no reliable theoretical data were available in the literature.
To study these molecules, we applied a two-step computational procedure that accounts for both electron-correlation and relativistic effects. In the first step, we performed large-scale CASSCF/ICMRCI+Q calculations that recover the correlation energy and introduce scalar relativistic effects. In the second step, spin-orbit coupling is treated by the "state interacting" method implemented in the MOLPRO package. We developed computational strategies based on these methods and adapted them to the different target molecules: for NbCl and TaCl we used scalar relativistic and spin-orbit pseudopotentials, while for NiF we performed all-electron calculations.
We first tested the computational strategy on the Nb+ and Ta+ cations. We then computed, for the first time, the scalar relativistic and spin-orbit electronic structures of NbCl (from 0 to 17000 cm-1) and TaCl (from 0 to 23000 cm-1). Using these theoretical data, we interpreted the experimental spectra in collaboration with Bernath et al. We proposed several assignments of electronic transitions in agreement with experiment, but our theoretical results did not allow us to assign all of them. Nevertheless, we identified a series of other probable electronic transitions that could, in the future, serve to interpret new, better-resolved spectra.
Beyond its experimental interest, this study allowed us to compare the electronic structures of the isovalent molecules VCl, NbCl and TaCl, revealing important differences between them.
Developing a new computational strategy to describe systems containing the nickel atom was a real challenge owing to the complexity of the electron-correlation effects. Our strategy consisted in introducing these effects while keeping the size of the calculations, which was becoming considerable, as small as possible.
We tested it on the Ni atom and then applied it to the computation of the scalar relativistic and spin-orbit electronic structures of the NiF molecule between 0 and 2500 cm-1. The results we obtained corroborate experiment.
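The second step of the procedure, the "state interacting" treatment of spin-orbit coupling, amounts to representing the spin-orbit operator in the basis of the scalar relativistic electronic states and diagonalizing the resulting matrix. A minimal two-state numerical sketch follows; the energies and the coupling constant are illustrative assumptions, not values from this work.

```python
import numpy as np

# Toy "state interacting" model: two scalar relativistic states with a
# spin-orbit matrix element between them. Diagonalizing the matrix gives
# the spin-orbit-coupled energy levels. All numbers (in cm^-1) are
# illustrative, not results from the thesis.
e1, e2 = 0.0, 1200.0   # scalar relativistic state energies
h_so = 400.0           # assumed spin-orbit coupling matrix element

h = np.array([[e1, h_so],
              [h_so, e2]])

levels = np.linalg.eigvalsh(h)  # spin-orbit-coupled levels, ascending
print(levels)  # the states repel: the splitting exceeds e2 - e1
```

In practice the interacting basis contains many states of different spin and spatial symmetry, and the matrix elements come from the correlated wavefunctions of the first step; the diagonalization step itself is exactly this small linear-algebra problem scaled up.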
|