51
Symplectic integration of constrained Hamiltonian systems. Leimkuhler, Benedict; Reich, Sebastian. January 1994.
A Hamiltonian system in potential form (formula in the original abstract) subject to smooth constraints on q can be viewed as a Hamiltonian system on a manifold, but numerical computations must be performed in R^n. In this paper, methods that reduce "Hamiltonian differential-algebraic equations" to ODEs in Euclidean space are examined. The authors study the construction of canonical parameterizations or local charts, as well as methods that construct ODE systems in the embedding space which preserve the constraint manifold as an invariant manifold. In each case, a Hamiltonian system of ordinary differential equations is produced. The stability of the constraint invariants and the behavior of the original Hamiltonian along solutions are investigated both numerically and analytically.
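For orientation, the potential-form system referred to (whose formula is elided in this version of the abstract, so the following generic statement is an assumption rather than a quotation) is commonly written as

H(q, p) = \tfrac{1}{2} p^\top M^{-1} p + F(q), \qquad g(q) = 0,

with equations of motion

\dot{q} = M^{-1} p, \qquad \dot{p} = -\nabla F(q) - G(q)^\top \lambda, \qquad g(q) = 0,

where G(q) = \partial g / \partial q and the Lagrange multiplier \lambda enforces the constraint on the configuration variables q.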
52
Strong Stability Preserving Hermite-Birkhoff Time Discretization Methods. Nguyen, Thu Huong. 6 November 2012.
The main goal of the thesis is to construct explicit, s-stage, strong-stability-preserving (SSP) Hermite-Birkhoff (HB) time discretization methods of order p with nonnegative coefficients for the integration of hyperbolic conservation laws. The Shu-Osher form and the canonical Shu-Osher form, by means of the vector formulation for SSP Runge-Kutta (RK) methods, are extended to SSP HB methods. The SSP coefficients of k-step, s-stage methods of order p, HB(k,s,p), constructed as combinations of k-step methods of order (p - 3) with s-stage explicit RK methods of order 4, and of k-step methods of order (p - 4) with s-stage explicit RK methods of order 5, respectively, for s = 4, 5, ..., 10 and p = 4, 5, ..., 12, are constructed and compared with other methods. The efficiency gains of the new, optimal SSP HB methods over other SSP methods, such as Huang's hybrid methods and RK methods, are shown numerically by means of their effective SSP coefficients and largest effective CFL numbers. The formulae of these new optimal methods are presented in their Shu-Osher form.
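For context, the Shu-Osher form of an explicit s-stage RK method applied to u' = F(u), standard in the SSP literature and the representation the thesis extends to HB methods, reads

u^{(0)} = u^n, \qquad u^{(i)} = \sum_{j=0}^{i-1} \left( \alpha_{ij} u^{(j)} + \Delta t \, \beta_{ij} F(u^{(j)}) \right), \quad i = 1, \dots, s, \qquad u^{n+1} = u^{(s)}.

If the forward Euler step is strongly stable under \Delta t \le \Delta t_{FE} and all \alpha_{ij}, \beta_{ij} \ge 0, the method is SSP under \Delta t \le c \, \Delta t_{FE}, with SSP coefficient c = \min_{i,j} \alpha_{ij} / \beta_{ij} (taken over \beta_{ij} \neq 0).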
53
Locally Mass-Conservative Method With Discontinuous Galerkin In Time For Solving Miscible Displacement Equations Under Low Regularity. Li, Jizhou. 16 September 2013.
The miscible displacement equations provide the mathematical model for simulating the displacement of a mixture of oil and a miscible fluid in underground reservoirs during the enhanced oil recovery (EOR) process.
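In one common statement of the model (the precise form and source terms vary between references, so this should be read as illustrative rather than as the thesis's exact formulation), the Darcy velocity u, pressure p, and solvent concentration c satisfy

u = -\frac{K}{\mu(c)} \nabla p, \qquad \nabla \cdot u = q,

\phi \, \frac{\partial c}{\partial t} + \nabla \cdot \left( u c - D(u) \nabla c \right) = \hat{c} \, q,

where K is the permeability, \mu(c) the concentration-dependent viscosity, \phi the porosity, D(u) the diffusion-dispersion tensor, and \hat{c} the injected concentration at the wells.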
In this thesis, I propose a stable numerical scheme combining a mixed finite element method and a space-time discontinuous Galerkin method for solving the miscible displacement equations under low regularity assumptions.
Convergence of the discrete solution is investigated using a compactness theorem for functions that are discontinuous in space and time.
Numerical experiments illustrate that the rate of convergence is improved by using a high order time stepping method.
For petroleum engineers, it is essential to compute finely detailed fluid profiles in order to design efficient recovery procedures and thereby increase production in the EOR process.
The method I propose takes advantage of both the high-order time approximation and the discontinuous Galerkin method in space, and is capable of providing accurate numerical solutions to assist in increasing the production rate of the miscible displacement oil recovery process.
54
Nonlinearly consistent schemes for coupled problems in reactor analysis. Mahadevan, Vijay Subramaniam. 25 April 2007.
Conventional coupling paradigms used today to couple the various physics components in reactor analysis problems can be inconsistent in their treatment of the nonlinear terms. This forces the use of smaller time steps to maintain stability and accuracy requirements, thereby increasing the computational time. These inconsistencies can be overcome by using better approximations to the nonlinear operator in a time-stepping strategy to regain the lost accuracy.
This research aims at finding remedies that provide consistent coupling and time-stepping strategies with good stability properties and higher orders of accuracy. Consistent coupling strategies, namely predictive and accelerated methods, were introduced for several reactor transient accident problems, and their performance was analyzed for 0-D and 1-D models. The results indicate that consistent approximations can be made to enhance the overall accuracy of conventional codes with such simple, nonintrusive techniques.
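A minimal sketch of the distinction, on a toy two-physics system rather than the reactor models of the thesis (the equations, names, and the choice of a Picard-iterated implicit midpoint rule as the "consistent" scheme are illustrative assumptions):

    def f_power(x, y):
        # Toy "neutronics": power x grows or decays with temperature feedback y.
        return (1.0 - y) * x

    def f_temp(x, y):
        # Toy "thermal": temperature y is driven by power x and relaxes at rate 0.8.
        return 0.5 * x - 0.8 * y

    def step_conventional(x, y, dt):
        # Conventional operator-split coupling: each physics advances seeing only
        # the other's beginning-of-step state, so the nonlinear coupling terms are
        # treated inconsistently and a first-order splitting error appears.
        return x + dt * f_power(x, y), y + dt * f_temp(x, y)

    def step_consistent(x, y, dt, iters=3):
        # Nonlinearly consistent coupling: Picard-iterate an implicit-midpoint
        # approximation of the fully coupled system, so each physics sees a
        # converged approximation of the other within the step.
        x_new, y_new = x, y
        for _ in range(iters):
            xm, ym = 0.5 * (x + x_new), 0.5 * (y + y_new)
            x_new = x + dt * f_power(xm, ym)
            y_new = y + dt * f_temp(xm, ym)
        return x_new, y_new

    # One step from an arbitrary state; the consistent step retains the
    # second-order accuracy of the midpoint rule, the split step does not.
    print(step_conventional(1.0, 0.5, 0.1))
    print(step_consistent(1.0, 0.5, 0.1))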
A detailed analysis of a monoblock coupling strategy using time adaptation was also carried out for several higher-order implicit Runge-Kutta (IRK) schemes. The results indicate that adaptive time stepping provided better accuracy and reliability in the solution fields than constant-step methods, even across discontinuities in the transients. Moreover, the computational and total memory requirements of such schemes make them attractive alternatives for use in conventional coupling codes.
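For reference, a standard error-based controller of the kind commonly paired with embedded IRK schemes (a generic form, not necessarily the controller used in the thesis) chooses the next step from the local error estimate err and the tolerance tol as

\Delta t_{new} = \Delta t \left( \frac{tol}{\| err \|} \right)^{1/(\hat{p}+1)},

where \hat{p} is the order of the embedded error estimator; the step is rejected and retried whenever \| err \| > tol.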
55
Kontrolle semilinearer elliptischer Randwertprobleme mit variationeller Diskretisierung (Control of semilinear elliptic boundary value problems with variational discretization). Matthes, Ulrich. 6 April 2010.
Control problems arise in many applications in science and engineering. This thesis studies optimal control problems with semilinear elliptic partial differential equations as constraints, where the control is restricted by control bounds in the form of inequality constraints. The objective function is quadratic in the control, so the solution of the optimization problem can be represented through the projection condition in terms of the adjoint state.
A new approach is variational discretization, in which only the state and the adjoint state are discretized, but not the control space. For control-constrained problems this approach admits higher convergence rates for the control than a discretization of the control space. The projection condition for the variationally discretized problem acts on the same admissible set as in the undiscretized problem.
In the present work, the variational discretization method is applied to semilinear elliptic optimal control problems, and error estimates for the controls are proven. The main emphasis is on distributed control, but Neumann boundary control is treated as well. After an overview of the literature, the problem is formulated together with its assumptions, and the optimality conditions are stated. Then the existence of a solution and the convergence of the discrete solutions to a continuous solution are shown, and finite element convergence orders are given. Optimal error estimates in various norms for the variational control are then proven; in particular, the error estimates are expressed in terms of the finite element errors of the state and the adjoint state.
To this end, the nonlinear fixed-point equation is linearized by a semismooth Newton method, which is also used for the numerical solution of the problem. The assumption underlying the convergence order is not the SSC (the second-order sufficient condition, which implies local convexity of the objective) but the invertibility of the Newton operator, a condition at the optimal control. All that is required is that the boundary of the active set is a null set and that the Newton operator is invertible at the optimal solution. Schauder's fixed-point theorem is used to prove the existence of a fixed point of the Newton equation within the desired neighborhood, and the uniqueness of such a fixed point for a given triangulation is shown under sufficiently fine discretization.
The result is that the convergence rate is limited only by the finite element convergence rates of the state and the adjoint state. This rate is bounded not only by the ansatz functions but also by the smoothness of the right-hand side, so the kink along the boundary of the active set sets a limit here.
Furthermore, the implementation of the semismooth Newton method for the infinite-dimensional control space under variational discretization is described, with particular attention to the two-dimensional distributed case. The proven convergence rates are demonstrated on several semilinear and linear examples using variational discretization. The methods used in the analytical proofs correspond to those used in the numerical solution: the fixed-point iteration, and the Newton method solved either for the control or for the adjoint state. A few implementation details require care; for example, the control must not be updated incrementally within the Newton method or the fixed-point iteration, but has to be recomputed from scratch in every step.
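For orientation, with an objective that is quadratic in the control, J(y, u) = G(y) + (\alpha/2) \| u \|^2, and box constraints a \le u \le b, the projection condition mentioned above typically takes the form

u = P_{[a,b]} \left( -\tfrac{1}{\alpha} \, p \right),

where p is the adjoint state and P_{[a,b]} denotes the pointwise projection onto the admissible set (the exact scaling is an assumption here, as conventions differ). Variational discretization evaluates this projection with the discrete adjoint p_h while leaving u undiscretized, which is why the admissible set itself need not be discretized.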
56
Modeling of Magnetic Fields and Extended Objects for Localization Applications. Wahlström, Niklas. January 2015.
The level of automation in our society is ever increasing. Technologies like self-driving cars, virtual reality, and fully autonomous robots, which were all unimaginable a few decades ago, are realizable today and will become standard consumer products in the future. These technologies depend upon autonomous localization and situation awareness, where careful processing of sensory data is required. To increase efficiency, robustness, and reliability, appropriate models for these data are needed.
In this thesis, such models are analyzed within three different application areas, namely (1) magnetic localization, (2) extended target tracking, and (3) autonomous learning from raw pixel information.
Magnetic localization is based on one or more magnetometers measuring the induced magnetic field from magnetic objects. In this thesis we present a model for determining the position and the orientation of small magnets with an accuracy of a few millimeters. This enables three-dimensional interaction with computer programs that cannot be handled with other localization techniques. Further, an additional model is proposed for detecting wrong-way drivers on highways based on sensor data from magnetometers deployed in the vicinity of traffic lanes. Models for mapping complex magnetic environments are also analyzed. Such magnetic maps can be used for indoor localization where other systems, such as GPS, do not work.
In the second application area, models for tracking objects from laser range sensor data are analyzed. The target shape is modeled with a Gaussian process and is estimated jointly with the target position and orientation. The resulting algorithm is capable of tracking various objects with different shapes within the same surveillance region.
In the third application area, autonomous learning based on high-dimensional sensor data is considered. We consider one instance of this challenge, the so-called pixels-to-torques problem, where an agent must learn a closed-loop control policy from pixel information only. To solve this problem, high-dimensional time series are described using a low-dimensional dynamical model. Techniques from machine learning, together with standard tools from control theory, are used to autonomously design a controller for the system without any prior knowledge.
System models used in the applications above are often provided in continuous time, while a major part of the applied theory is developed for discrete-time systems, so discretization of continuous-time models is fundamental. This thesis therefore ends with a method for performing such discretization using Lyapunov equations together with analytical solutions, enabling efficient implementation in software.
How can a computer be made to follow the puck in table hockey to compile match statistics, a brush to paint virtual watercolors, a scalpel to digitize pathology, or a multi-tool to sculpt in 3D? These are four applications built on the patent-pending algorithm developed in this thesis. The method hides a small magnet in the tool and deploys a number of three-axis magnetometers, of the same kind found in our smartphones, in a network around the workspace. The magnetic field of the magnet gives rise to a unique signature in the sensors, from which the position of the magnet can be computed in three degrees of freedom, together with two of its angles. The thesis develops a complete framework for these computations and the associated analysis.
Another application studied on the same principle is the detection and classification of vehicles. In a collaboration with Luleå University of Technology and project partners, an algorithm was developed to classify the direction in which vehicles pass using only measurements from a two-axis magnetometer. Tests outside Luleå show essentially 100% correct classification. Viewing a vehicle as a structure of magnetic dipoles instead of a single large one is an example of a so-called extended target. In classical theory for tracking aircraft, ships, and the like, targets are described as points, but many of today's increasingly accurate sensors generate several measurements from the same target. By giving targets a geometric extent or other attributes (such as dipole structures), one can not only improve tracking algorithms and use sensor data more efficiently, but also classify targets more effectively. The thesis proposes a model that describes the geometric shape more flexibly and with a higher level of detail than previous models in the literature. A quite different application studied is the use of machine learning to teach a computer to steer a planar pendulum to a desired position solely by analyzing the pixels of video images. The approach lets the computer study a large number of images of the pendulum, in this case thousands, to learn how a known control signal affects the pendulum dynamics, and then act autonomously once the learning phase is complete. In the long run, this technique could be used to develop autonomous robots.
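The magnetic models above rest on the magnetostatic point-dipole field, whose standard form (stated here for orientation; the thesis develops richer models on top of it) is

B(r) = \frac{\mu_0}{4\pi} \, \frac{3 \hat{r} (\hat{r} \cdot m) - m}{\| r \|^3}, \qquad \hat{r} = r / \| r \|,

where m is the dipole moment and r the vector from the dipole to the sensor; localization amounts to inverting this map from position and moment to the measured fields.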
57
Σολιτονικές λύσεις της εξίσωσης Sine-Gordon: από το συνεχές στο διακριτό σύστημα (Soliton solutions of the sine-Gordon equation: from the continuous to the discrete system). Σταμούλη, Βασιλική. 5 February 2015.
The discretization of partial differential equations (PDEs) is a key step in their numerical solution, and therefore is one of the main issues in modern mathematics. The transition from continuous PDEs to their discrete counterparts can be done by various numerical methods, though not all methods are equally suitable; for this reason one should be careful to use an appropriate discretization method for each specific problem.
In the first chapter it becomes clear, through the simple example of the logistic equation, that a naive discretization may dramatically change the nature of the problem and its solutions. Particular attention needs to be paid to the preservation (before and after the discretization) of the symmetries and invariant quantities of the problem.
In the present work we study the case of the famous sine-Gordon equation, focusing on its soliton solutions. The second chapter presents a step-by-step derivation of the aforementioned equation. In the third chapter we show, by means of two different discretization schemes, which conditions must be met in order to guarantee that also the discrete system will admit soliton solutions. As is well known, soliton solutions are required to remain unchanged when they interact with each other, maintaining their speed and amplitude before and after the interaction.
In the fourth chapter we summarize the conclusions of this work and draw a comparison between the two numerical schemes we have studied.
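For reference, the continuous sine-Gordon equation and its single-soliton (kink) solution take the standard forms

u_{tt} - u_{xx} + \sin u = 0,

u(x, t) = 4 \arctan \left( \exp \left( \pm \frac{x - v t - x_0}{\sqrt{1 - v^2}} \right) \right), \qquad |v| < 1,

where the sign selects a kink or an antikink; preserving such solutions, and their elastic interactions, under discretization is precisely the property examined in the third chapter.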
58
Adaptive modeling of plate structures. Bohinc, Uroš. 5 May 2011.
The primary goal of the thesis is to provide some answers to the questions related to the key steps in the process of adaptive modeling of plates. Since the adaptivity depends on reliable error estimates, a large part of the thesis is devoted to the derivation of computational procedures for discretization error estimates as well as model error estimates. A practical comparison of some of the established discretization error estimates is made. Special attention is paid to the so-called equilibrated residuum method, which has the potential to be used both for discretization error and model error estimates. It should be emphasized that model error estimates are quite hard to obtain, in contrast to discretization error estimates. The concept of model adaptivity for plates is implemented in this work on the basis of the equilibrated residuum method and a hierarchic family of plate finite element models.
The finite elements used in the thesis range from thin-plate to thick-plate elements; the latter are based on a newly derived higher-order plate theory, which includes through-the-thickness stretching. The model error is estimated by local, element-wise computations. Since all the finite elements representing the chosen plate mathematical models are re-derived so as to share the same interpolation bases, the difference between the local computations can be attributed mainly to the model error. This choice of finite elements enables effective computation of the model error estimate and improves the robustness of the adaptive modeling; the discretization error can thus be computed by an independent procedure.
Many numerical examples are provided as an illustration of the performance of the derived plate elements, the discretization error procedures, and the modeling error procedure. Since the basic goal of modeling in engineering is to produce an effective model, which yields the most accurate results with the minimum input data, the need for adaptive modeling will always be present. In this view, the present work is a contribution to the final goal of finite element modeling of plate structures: a fully automatic adaptive procedure for the construction of an optimal computational model (an optimal finite element mesh and an optimal choice of plate model for each element of the mesh) for a given plate structure.
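The adaptivity described above rests on the usual additive split of the total error. Writing u for the exact solution of the reference (full three-dimensional) model, u_m for the exact solution of plate model m, and u_{m,h} for its finite element approximation, one has the identity

u - u_{m,h} = (u - u_m) + (u_m - u_{m,h}),

where the first bracket is the model error, estimated here element-wise via the equilibrated residuum method, and the second is the discretization error, which can then be estimated by an independent procedure.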
59
Classification models for high-dimensional data with sparsity patterns. Tillander, Annika. January 2013.
Today's high-throughput data collection devices, e.g. spectrometers and gene chips, create information in abundance. However, this poses serious statistical challenges, as the number of features is usually much larger than the number of observed units. Further, in this high-dimensional setting, only a small fraction of the features are likely to be informative for any specific project. In this thesis, three different approaches to two-class supervised classification in this high-dimensional, low-sample-size setting are considered.
Some classifiers are known to mitigate the issues of high dimensionality, e.g. distance-based classifiers such as naive Bayes. However, these classifiers are often computationally intensive, and their running time is much reduced on discrete data. Hence, continuous features are often transformed into discrete ones. In the first paper, a discretization algorithm suitable for high-dimensional data is suggested and compared with other discretization approaches; the effect of discretization on the misclassification probability in the high-dimensional setting is also evaluated.
Linear classifiers are more stable, which motivates adjusting the linear discriminant procedure to the high-dimensional setting. In the second paper, a two-stage estimation procedure for the inverse covariance matrix, applying Lasso-based regularization and Cuthill-McKee ordering, is suggested. The estimation gives a block-diagonal approximation of the covariance matrix, which in turn leads to an additive classifier. In the third paper, an asymptotic framework that represents sparse and weak block models is derived, and a technique for block-wise feature selection is proposed.
Probabilistic classifiers have the advantage of providing the probability of membership in each class for new observations, rather than simply assigning a class. In the fourth paper, a method is developed for constructing a Bayesian predictive classifier. Given the block-diagonal covariance matrix, the resulting Bayesian predictive and marginal classifier provides an efficient solution to the high-dimensional problem by splitting it into smaller tractable problems. The relevance and benefits of the proposed methods are illustrated using both simulated and real data.
With today's technology, such as spectrometers and gene chips, data are generated in great quantities. This abundance of data is not only an advantage; it also causes certain problems, since typically the number of variables (p) is considerably larger than the number of observations (n). This produces so-called high-dimensional data, which call for new statistical methods, as the traditional ones were developed for the reverse situation (p < n). Moreover, usually very few of all these variables are relevant for any given project, and the information carried by the relevant variables is often weak; such data are therefore described as sparse and weak, and identifying the relevant variables is commonly likened to finding a needle in a haystack. This thesis takes up three different ways to classify this type of high-dimensional data, where to classify means to learn, from a data set containing both explanatory variables and an outcome variable, a function or algorithm that can predict the outcome variable from the explanatory variables alone. The kind of real data used in the thesis is microarrays: cell samples showing the activity of the genes in the cell.
The goal of the classification is to use the variation in activity across the thousands of genes (the explanatory variables) to determine whether a cell sample comes from cancer tissue or from normal tissue (the outcome variable). There are classification methods that can handle high-dimensional data, but they are often computationally intensive and therefore often work better on discrete data; by transforming continuous variables into discrete ones (discretizing), the computation time can be reduced and the classification made more efficient. The thesis studies how discretization affects the prediction accuracy of the classification and proposes a highly efficient discretization method for high-dimensional data. Linear classification methods have the advantage of being stable; their drawback is that they require an invertible covariance matrix, which the covariance matrix of high-dimensional data is not. The thesis proposes a way to estimate the inverse of sparse covariance matrices by a block-diagonal matrix. This matrix has the further advantage of leading to additive classification, which makes it possible to select whole blocks of relevant variables, and the thesis also presents a method for identifying and selecting the blocks. There are also probabilistic classification methods, which have the advantage of giving, for each observation, the probability of belonging to each of the possible outcomes, rather than merely predicting the outcome as most other classification methods do. The thesis proposes such a Bayesian method, given the block-diagonal matrix and normally distributed outcome classes. The relevance and advantages of the proposed methods are demonstrated by applying them to simulated and real high-dimensional data.
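As a toy illustration of why a block-diagonal covariance yields an additive classifier, here is a minimal numpy sketch under equal class priors; the function name and inputs are illustrative, and the thesis's actual estimator builds the blocks via Lasso-based regularization and Cuthill-McKee ordering rather than taking them as given:

    import numpy as np

    def block_lda_score(X, mu0, mu1, blocks, cov_blocks):
        # Linear discriminant score when the covariance is block-diagonal:
        # the inverse is block-diagonal too, so the score is a sum of
        # independent block-wise LDA scores.
        #   blocks     : list of integer index arrays partitioning the features
        #   cov_blocks : list of per-block covariance matrices (invertible)
        score = np.zeros(X.shape[0])
        for idx, S in zip(blocks, cov_blocks):
            w = np.linalg.solve(S, mu1[idx] - mu0[idx])  # block discriminant direction
            m = 0.5 * (mu1[idx] + mu0[idx])              # midpoint of the class means
            score += (X[:, idx] - m) @ w                 # blocks contribute additively
        return score  # with equal priors, assign class 1 when score > 0

    # Tiny usage example with two 3-feature blocks:
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 6))
    blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    cov_blocks = [np.eye(3), np.eye(3)]
    print(block_lda_score(X, np.zeros(6), np.ones(6), blocks, cov_blocks))

The additive structure is what makes block-wise feature selection natural: a block whose contribution to the score is negligible can be dropped without affecting the rest.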
60
Investigation of Effervescent Atomization Using Laser-Based Measurement Techniques. Ghaemi, Sina. Date unknown.
No description available.