121 |
Anwendung des Mikrogravitationslinseneffekts zur Untersuchung astronomischer Objekte. Helms, Andreas, January 2004 (has links)
Studying microlensed astronomical objects makes it possible to obtain information about the size and structure of these objects.
In the first part of this thesis, the spectra of three lensed quasars obtained with the Potsdam Multi Aperture Spectrophotometer (PMAS) are examined for signatures of microlensing. Evidence for microlensing was found in the spectra of the quadruple quasar HE 0435-1223 and the double quasar HE 0047-1756, whereas the double quasar UM 673 (Q 0142-100) shows no signs of microlensing.
Inverting the light curve of a microlensing caustic-crossing event makes it possible to reconstruct the one-dimensional brightness profile of the lensed source. This is investigated in the second part of this thesis.
The mathematical description of this task leads to a Volterra integral equation of the first kind, whose solution is an ill-posed problem. To solve it, this thesis applies a local regularization method, which is better adapted to the causal structure of the Volterra equation than the previously used Tikhonov-Phillips regularization.
It turns out that this method allows a better reconstruction of smaller structures in the source. Furthermore, the applicability of the regularization method to realistic light curves with irregular sampling or larger gaps in the data points is investigated. / The study of microlensed astronomical objects can reveal information about the size and the structure of these objects.
In the first part of this thesis we analyze the spectra of three lensed quasars obtained with the Potsdam Multi Aperture Spectrophotometer (PMAS). The spectra of the quadruple quasar HE 0435-1223 and the double quasar HE 0047-1756 show evidence for microlensing, whereas in the double quasar UM 673 (Q 0142-100) no evidence for microlensing could be found.
By inverting the light curve of a microlensing caustic-crossing event, the one-dimensional luminosity profile of the lensed source can be reconstructed. This is investigated in the second part of this thesis. The mathematical formulation of this problem leads to a Volterra integral equation of the first kind, whose solution is an ill-posed problem. For the solution we use a local regularization method which is better adapted to the causal structure of the Volterra integral equation than the previously used Tikhonov-Phillips regularization. Furthermore, we show that this method is more robust in reconstructing small structures in the source profile. We also study the influence of irregularly sampled data and gaps in the light curve on the result of the inversion.
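The inversion described above can be illustrated with a small numerical sketch. The light curve is modeled here as a first-kind Volterra equation with the generic fold-caustic kernel K(t, s) = 1/sqrt(t - s); the zeroth-order local (sequential) regularization, the window length r, and the toy Gaussian source profile are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# First-kind Volterra model of a fold-caustic crossing:
# g(t) = int_0^t f(s) / sqrt(t - s) ds, discretized with the midpoint
# rule so that the square-root singularity stays integrable.
n, T = 200, 1.0
t = np.linspace(0.0, T, n + 1)
dt = T / n
mid = 0.5 * (t[:-1] + t[1:])                       # quadrature nodes s_j
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        A[i, j] = dt / np.sqrt(t[i + 1] - mid[j])  # kernel * weight

f_true = np.exp(-0.5 * ((mid - 0.4) / 0.05) ** 2)  # toy source profile
g = A @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

# Zeroth-order local regularization (Lamm-type): determine f[i] from a
# short "future" window of r data points, holding f constant over the
# window -- a causal scheme, unlike all-at-once Tikhonov-Phillips.
r = 6
f = np.zeros(n)
for i in range(n - r):
    c = np.zeros(r)
    rhs = np.zeros(r)
    for m in range(r):
        rhs[m] = g[i + m] - A[i + m, :i] @ f[:i]   # remove known past
        c[m] = A[i + m, i:i + m + 1].sum()         # weight of f[i] ahead
    f[i] = (c @ rhs) / (c @ c)                     # 1-D least squares
```

Because each f[i] is fixed from only a short causal window, missing points simply drop out of the local fit, which suggests why such causal schemes are attractive for irregularly sampled light curves.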
|
122 |
Simultaneous activity and attenuation reconstruction in emission tomography. Dicken, Volker, January 1998 (has links)
In single photon emission computed tomography (SPECT) one is interested in reconstructing the activity distribution f of some radiopharmaceutical. The data gathered suffer from attenuation due to the tissue density µ. Each imaged slice incorporates noisy sample values of the nonlinear attenuated Radon transform
(the original abstract displays the formula here; in standard notation, the attenuated Radon transform reads $A(f,\mu)(\omega, s) = \int_{\mathbb{R}} f(s\omega^{\perp} + t\omega)\, \exp\!\Big(-\int_{t}^{\infty} \mu(s\omega^{\perp} + \tau\omega)\, d\tau\Big)\, dt$, where $\omega$ is the ray direction and $s$ the detector coordinate)
Traditional theory for SPECT reconstruction treats µ as a known parameter. In practical applications, however, µ is not known, but is either crudely estimated, determined in costly additional measurements, or simply neglected. We demonstrate that an approximation of both f and µ from SPECT data alone is feasible, leading to quantitatively more accurate SPECT images. The result is based on nonlinear Tikhonov regularization techniques for parameter estimation problems in differential equations combined with Gauss-Newton-CG minimization.
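As a concrete illustration of the forward model, the sketch below evaluates a discrete attenuated projection for a single view with axis-aligned rays; the grid, the constant attenuation value, and the noise level are illustrative assumptions, and the simultaneous estimation itself is only indicated in the closing comment.

```python
import numpy as np

def attenuated_projection(f, mu, dx=1.0):
    """One view of the attenuated Radon transform on a grid, with rays
    running along +x: each emitting pixel is damped by the attenuation
    accumulated on the remaining path to the detector (right edge)."""
    # sum of mu over the pixels strictly to the right of each pixel
    tail = np.cumsum(mu[:, ::-1], axis=1)[:, ::-1] - mu
    return (f * np.exp(-dx * tail)).sum(axis=1) * dx

rng = np.random.default_rng(1)
f_true = np.maximum(rng.normal(1.0, 0.3, (64, 64)), 0.0)  # activity
mu_true = np.full((64, 64), 0.15)                         # attenuation
data = attenuated_projection(f_true, mu_true)
data += 0.01 * rng.standard_normal(data.shape)            # noisy samples

# Simultaneous reconstruction then means minimizing, over BOTH f and mu,
#   J(f, mu) = ||P(f, mu) - data||^2 + alpha * (penalty(f) + penalty(mu)),
# a nonlinear Tikhonov problem that can be attacked with Gauss-Newton,
# using CG for the linearized normal equations in each step.
```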
|
123 |
Algorithms for a Partially Regularized Least Squares Problem. Skoglund, Ingegerd, January 2007 (has links)
When analyzing water samples taken from, e.g., a watercourse, the concentrations of various substances are determined. These concentrations often depend on the water flow. It is of interest to find out whether observed changes in the concentrations are due to natural variation or are caused by other factors. To investigate this, a statistical time-series model containing unknown parameters has been proposed. Fitting the model to measured data leads to an underdetermined system of equations. The thesis studies, among other things, different ways of ensuring a unique and reasonable solution. The basic idea is to impose certain additional conditions on the sought parameters. In the studied model one can, for example, require that certain parameters do not vary strongly with time while still allowing seasonal variation. This is done by regularizing these parameters in the model, which gives rise to a least squares problem with one or two regularization parameters. Since not all parameters are regularized, we moreover obtain a partially regularized least squares problem. In general the values of the regularization parameters are not known, and the problem may have to be solved for several different values in order to obtain a reasonable solution. The thesis studies how this problem can be solved numerically, mainly by two different methods: an iterative and a direct method. In addition, some ways of determining suitable values of the regularization parameters are studied. In an iterative solution method, a given initial approximation is improved stepwise until a suitably chosen stopping criterion is fulfilled. Here we use the conjugate gradient method with specially constructed preconditioners. The number of iterations required to solve the problem with and without preconditioning is compared both theoretically and in practice. The method is examined here only with the same value for the two regularization parameters. In the direct method, QR factorization is used to solve the least squares problem. The idea is to first perform the computations that can be done independently of the regularization parameters, while taking the special structure of the problem into account. To determine values of the regularization parameters, Reinsch's method is generalized to the case of two parameters. Generalized cross-validation and a computationally cheaper Monte Carlo method are also investigated. / Statistical analysis of data from rivers deals with time series which are dependent, e.g., on climatic and seasonal factors. For example, it is a well-known fact that the load of substances in rivers can be strongly dependent on the runoff. It is of interest to find out whether observed changes in riverine loads are due only to natural variation or caused by other factors. Semi-parametric models have been proposed for estimation of time-varying linear relationships between runoff and riverine loads of substances. The aim of this work is to study some numerical methods for solving the linear least squares problem which arises. The model gives a linear system of the form A1x1 + A2x2 + n = b1. The vector n consists of identically distributed random variables all with mean zero. The unknowns, x, are split into two groups, x1 and x2. In this model, usually there are more unknowns than observations and the resulting linear system is most often consistent, having an infinite number of solutions. Hence some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x2. This can be accomplished by regularization using a matrix A3, which is a discretization of some norm.
The problem is formulated as a partially regularized least squares problem with one or two regularization parameters. The parameter x2 here has a two-dimensional structure. By using two different regularization parameters it is possible to regularize separately in each dimension. We first study (for the case of one parameter only) the conjugate gradient method for the solution of the problem. To improve the rate of convergence, block preconditioners of Schur complement type are suggested, analyzed and tested. A direct solution method based on QR decomposition is also studied. The idea is to first perform operations independent of the values of the regularization parameters. Here we utilize the special block structure of the problem. We further discuss the choice of regularization parameters and in particular generalize Reinsch's method to the case with two parameters. Finally the cross-validation technique is treated. Here a Monte Carlo method is also used, by which an approximation to the generalized cross-validation function can be computed efficiently.
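A minimal numerical sketch of the partially regularized problem, with illustrative dimensions, a single regularization parameter, and a second-difference matrix standing in for A3, can be written as one stacked ordinary least squares problem:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n1, n2 = 80, 5, 120                 # more unknowns than rows: underdetermined
A1 = rng.standard_normal((m, n1))
A2 = rng.standard_normal((m, n2))
b = rng.standard_normal(m)

# Second-difference operator: penalizes rapid variation in x2 only
L = np.diff(np.eye(n2), n=2, axis=0)
lam = 1.0                              # regularization parameter (to be tuned)

# Partially regularized LS as one stacked ordinary LS problem:
# min || [A1 A2; 0 sqrt(lam) L] [x1; x2] - [b; 0] ||^2
A_big = np.block([[A1, A2],
                  [np.zeros((L.shape[0], n1)), np.sqrt(lam) * L]])
b_big = np.concatenate([b, np.zeros(L.shape[0])])
x, *_ = np.linalg.lstsq(A_big, b_big, rcond=None)
x1, x2 = x[:n1], x[n1:]
```

The direct method of the thesis goes further by organizing the QR factorization so that the work independent of the regularization parameters is performed once and reused for each new parameter value.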
|
124 |
Combining analytical and iterative reconstruction in helical cone-beam CT. Sunnegårdh, Johan, January 2007 (has links)
Contemporary algorithms employed for reconstruction of 3D volumes from helical cone-beam projections are so-called non-exact algorithms. This means that the reconstructed volumes contain artifacts irrespective of the detector resolution and the number of projection angles employed in the process. In this thesis, three iterative schemes for suppression of these so-called cone artifacts are investigated. The first scheme, iterative weighted filtered backprojection (IWFBP), is based on iterative application of a non-exact algorithm. For this method, artifact reduction as well as spatial resolution and noise properties are measured. During the first five iterations, cone artifacts are clearly reduced. As a side effect, spatial resolution and noise are increased. To avoid this side effect and improve the convergence properties, a regularization procedure is proposed and evaluated. In order to reduce the cost of the IWFBP scheme, a second scheme is created by combining IWFBP with the so-called ordered subsets technique, which we call OSIWFBP. This method divides the projection data set into subsets and operates sequentially on each of these in a certain order, hence the name “ordered subsets”. We investigate two different ordering schemes and numbers of subsets, as well as the possibility to accelerate cone artifact suppression. The main conclusion is that the ordered subsets technique indeed reduces the number of iterations needed, but that it suffers from the drawback of noise amplification. The third scheme starts by dividing input data into high- and low-frequency data, followed by non-iterative reconstruction of the high-frequency part and IWFBP reconstruction of the low-frequency part. This opens the possibility of acceleration by reducing the amount of data in the iterative part. The results show that a suppression of artifacts similar to that of the IWFBP method can be obtained, even if a significant part of the high-frequency data is non-iteratively reconstructed.
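The IWFBP scheme described above can be summarized in a few lines; P and Q below are placeholders for the forward projection and the non-exact WFBP reconstruction operators, which in a real implementation are large helical cone-beam kernels.

```python
import numpy as np

def iwfbp(p_meas, P, Q, n_iter=5, f0=None):
    """Generic iterative-FBP loop: repeatedly reconstruct the residual
    projections with a fast non-exact operator Q and add the correction.
    P : callable, forward projection of a volume to helical projections
    Q : callable, approximate (non-exact, e.g. WFBP) reconstruction
    """
    f = Q(p_meas) if f0 is None else f0
    for _ in range(n_iter):
        residual = p_meas - P(f)      # data not yet explained by f
        f = f + Q(residual)           # non-exact reconstruction of residual
        # optional: regularize f here (e.g. mild smoothing) to control
        # the noise/resolution increase reported for plain IWFBP
    return f
```

The OSIWFBP variant applies the same update subset by subset over the projection data, and the frequency-split variant feeds only the low-frequency part of the data through this loop.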
|
125 |
Regularized Calibration of Jump-Diffusion Option Pricing Models. Nassar, Hiba, January 2010 (has links)
An important issue in finance is model calibration. The calibration problem is the inverse of the option pricing problem. Calibration is performed on a set of option prices generated from a given exponential Lévy model. By numerical examples, it is shown that the usual formulation of the inverse problem via nonlinear least squares is an ill-posed problem. To achieve well-posedness, some regularization is needed. Therefore a regularization method based on relative entropy is applied.
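To make the calibration setup concrete, the sketch below fits a Merton jump-diffusion (one example of an exponential Lévy model) to a handful of synthetic option prices and regularizes with the relative entropy between the compound-Poisson Lévy measures of the candidate and a prior model. All numbers, the prior, and the weight alpha are illustrative assumptions; the thesis's actual models and entropy functional may differ.

```python
import numpy as np
from math import factorial
from scipy.stats import norm
from scipy.optimize import minimize

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def merton_call(S, K, T, r, sigma, lam, mu_j, sig_j, n=25):
    """Merton (1976) price: Poisson mixture of Black-Scholes prices."""
    m = np.exp(mu_j + 0.5 * sig_j**2)            # mean relative jump size
    price = 0.0
    for k in range(n):
        sig_k = np.sqrt(sigma**2 + k * sig_j**2 / T)
        r_k = r - lam * (m - 1.0) + k * np.log(m) / T
        w = np.exp(-lam * m * T) * (lam * m * T) ** k / factorial(k)
        price += w * bs_call(S, K, T, r_k, sig_k)
    return price

def levy_kl(lam, mu_j, sig_j, lam0, mu0, sig0):
    """KL divergence of two finite (compound-Poisson) Levy measures with
    Gaussian jumps: lam*log(lam/lam0) + lam*KL(N1||N0) - lam + lam0."""
    kl_n = np.log(sig0 / sig_j) + (sig_j**2 + (mu_j - mu0) ** 2) / (2 * sig0**2) - 0.5
    return lam * np.log(lam / lam0) + lam * kl_n - lam + lam0

S0, r, T = 100.0, 0.02, 0.5
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
true = (0.20, 0.50, -0.10, 0.15)                 # sigma, lam, mu_j, sig_j
c_obs = np.array([merton_call(S0, K, T, r, *true) for K in strikes])

prior = (0.25, 0.30, 0.00, 0.20)                 # e.g. yesterday's calibration
alpha = 1e-2                                     # entropy penalty weight

def objective(th):
    sigma, lam, mu_j, sig_j = th
    c = np.array([merton_call(S0, K, T, r, sigma, lam, mu_j, sig_j) for K in strikes])
    return np.sum((c - c_obs) ** 2) + alpha * levy_kl(lam, mu_j, sig_j,
                                                      prior[1], prior[2], prior[3])

res = minimize(objective, prior, method="L-BFGS-B",
               bounds=[(0.01, 1.0), (0.01, 3.0), (-1.0, 1.0), (0.01, 1.0)])
```

The role of the entropy term is to select, among parameter combinations that reproduce the prices almost equally well, a model close to the prior, which is what restores well-posedness.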
|
126 |
Single-Zone Cylinder Pressure Modeling and Estimation for Heat Release Analysis of SI Engines. Klein, Markus, January 2007 (has links)
Cylinder pressure modeling and heat release analysis are today important and standard tools for engineers and researchers when developing and tuning new engines. Being able to accurately model and extract information from the cylinder pressure is important for the interpretation and validity of the result. The first part of the thesis treats single-zone cylinder pressure modeling, where the specific heat ratio model constitutes a key part. This model component is therefore investigated more thoroughly. For the purpose of reference, the specific heat ratio is calculated for burned and unburned gases, assuming that the unburned mixture is frozen and that the burned mixture is at chemical equilibrium. Use of the reference model in heat release analysis is too time consuming, and therefore a set of simpler models, both existing and newly developed, are compared to the reference model. A two-zone mean temperature model and the Vibe function are used to parameterize the mass fraction burned. The mass fraction burned is used to interpolate the specific heats for the unburned and burned mixture, and to form the specific heat ratio, which yields a cylinder pressure modeling error of the same order as the measurement noise, and fifteen times smaller than the model originally suggested in Gatowski et al. (1984). The computational time increases by 40 % compared to the original setting, but is reduced by a factor of 70 compared to precomputed tables from the full equilibrium program. The specific heats for the unburned mixture are captured within 0.2 % by linear functions, and the specific heats for the burned mixture are captured within 1 % by higher-order polynomials for the major operating range of a spark-ignited (SI) engine. In the second part, four methods for compression ratio estimation based on cylinder pressure traces are developed and evaluated for both simulated and experimental cycles. Three methods rely upon a model of polytropic compression for the cylinder pressure. It is shown that they give a good estimate of the compression ratio at low compression ratios, although the estimates are biased. A method based on a variable projection algorithm with a logarithmic norm of the cylinder pressure yields the smallest confidence intervals and shortest computational time for these three methods. This method is recommended when computational time is an important issue. The polytropic pressure model lacks information about heat transfer, and therefore the estimation bias increases with the compression ratio. The fourth method includes heat transfer, crevice effects, and a commonly used heat release model for firing cycles. This method estimates the compression ratio more accurately in terms of bias and variance. The method is more computationally demanding and is thus recommended when estimation accuracy is the most important property. In order to estimate the compression ratio as accurately as possible, motored cycles with as high an initial pressure as possible should be used. The objective in part 3 is to develop an estimation tool for heat release analysis that is accurate, systematic and efficient. Two methods that incorporate prior knowledge of the parameter nominal value and uncertainty in a systematic manner are presented and evaluated. Method 1 is based on using a singular value decomposition of the estimated Hessian to reduce the number of estimated parameters one by one. The suggested number of parameters to use is then found as the one minimizing the Akaike final prediction error.
Method 2 uses a regularization technique to include the prior knowledge in the criterion function. Method 2 gives more accurate estimates than method 1. For method 2, prior knowledge with individually set parameter uncertainties yields more accurate and robust estimates. Once a choice of parameter uncertainty has been made, no user interaction is needed. Method 2 is then formulated in three different versions, which differ in how they determine how strong the regularization should be. The quickest version is based on ad-hoc tuning and should be used when computational time is important. Another version is more accurate and flexible to changing operating conditions, but is more computationally demanding.
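A small sketch of the key model component discussed in part one: the Vibe function parameterizes the mass fraction burned, which then interpolates the specific heats of the unburned and burned mixtures before the ratio is formed. The constant specific heats below are illustrative stand-ins for the linear and polynomial fits of the thesis.

```python
import numpy as np

def vibe(theta, theta_ig, dtheta, a=6.908, m=2.0):
    """Vibe (Wiebe) mass-fraction-burned profile; a = 6.908 = -ln(0.001)
    gives 99.9 % burned at theta_ig + dtheta."""
    x = np.clip((theta - theta_ig) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

def gamma_mix(xb, cp_u, cv_u, cp_b, cv_b):
    """Specific heat ratio of the mixture: interpolate the specific
    heats (not gamma itself) with the mass fraction burned."""
    cp = xb * cp_b + (1.0 - xb) * cp_u
    cv = xb * cv_b + (1.0 - xb) * cv_u
    return cp / cv

theta = np.linspace(-20.0, 60.0, 200)           # crank angle [deg]
xb = vibe(theta, theta_ig=-5.0, dtheta=40.0)
# illustrative (not tabulated) specific heats in J/(kg K):
gamma = gamma_mix(xb, cp_u=1120.0, cv_u=820.0, cp_b=1330.0, cv_b=1050.0)
```

Note that the interpolation acts on the specific heats, and the ratio is formed afterwards, as described above.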
|
127 |
Regularization as a tool for managing irregular immigration : An evaluation of the regularization of irregular immigrants in Spain through the labour market. Alonso Hjärtström, Livia, January 2008 (has links)
The objective of the thesis is to make a stakeholder evaluation of the regularization process that in 2005 gave irregular immigrants in Spain the right to apply for a legal status. I want to portray how different groups in the labour market experienced the process and identify the factors that contributed to the result. I further want to study whether regularization can be seen as an effective measure for managing irregular immigration. The methods are qualitative interviews and text analysis combined with an evaluation method. The main theories are Venturini's and Levinson's suggestions for a successful regularization. Other prominent theories are Soysal's theory about citizenship and Jordan and Düvell's and Castles's theories about irregular immigration. The result shows that the main argument for carrying out the process was to improve the situation in the labour market. The most prominent factors that affected the outcome were the social consensus preceding the process and the prerequisite of having a job contract. The regularization of irregular immigrants had an overall positive outcome, but the stringent prerequisites for being regularized, together with problems with sanctions against employers, probably had a somewhat negative effect on the result of the regularization.
|
128 |
Identification of switched linear regression models using sum-of-norms regularization. Ohlsson, Henrik; Ljung, Lennart, January 2013 (has links)
This paper proposes a general convex framework for the identification of switched linear systems. The proposed framework uses over-parameterization to avoid solving the otherwise combinatorially forbidding identification problem, and takes the form of a least-squares problem with a sum-of-norms regularization, a generalization of the ℓ1-regularization. The regularization constant regulates the complexity and is used to trade off the fit and the number of submodels. / Funding agencies: Swedish Foundation for Strategic Research (center MOVIII); Swedish Research Council (Linnaeus center CADICS); European Research Council (267381); Sweden-America Foundation; Swedish Science Foundation.
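A compact way to reproduce the idea is to give every sample its own parameter vector (the over-parameterization) and penalize the sum of norms of consecutive parameter differences, which drives most differences exactly to zero. The sketch below uses cvxpy as a convenient convex solver; the data, the two simulated submodels, and the value of lam are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
T, d = 200, 2
x = rng.standard_normal((T, d))
# two submodels: a switch in the regression parameters at t = 100
theta_true = np.where(np.arange(T)[:, None] < 100, [1.0, -0.5], [-1.0, 2.0])
y = np.einsum("td,td->t", x, theta_true) + 0.05 * rng.standard_normal(T)

theta = cp.Variable((T, d))            # one parameter vector per sample
lam = 5.0                              # trades fit against number of switches
fit = cp.sum_squares(y - cp.sum(cp.multiply(x, theta), axis=1))
switch = cp.sum(cp.norm(theta[1:] - theta[:-1], 2, axis=1))  # sum of norms
prob = cp.Problem(cp.Minimize(fit + lam * switch))
prob.solve()
# theta.value is approximately piecewise constant; the surviving jumps
# mark the estimated switching instants between submodels
```

With 2-norms this is a group-lasso-type penalty: whole difference vectors are set to zero at once, so the estimate becomes piecewise constant and the remaining change points indicate the switches.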
|
129 |
Regularization of Parameter Problems for Dynamic Beam Models. Rydström, Sara, January 2010 (has links)
The field of inverse problems is an area in applied mathematics that is of great importance in several scientific and industrial applications. Since an inverse problem is typically based on nonlinear and ill-posed models, it is very difficult to solve. To find a regularized solution it is crucial to have a priori information about the solution; therefore, general theories are not sufficient for new applications. In this thesis we consider the inverse problem of determining the beam bending stiffness from measurements of the transverse dynamic displacement. Of special interest is the localization of parts with reduced bending stiffness. Driven by requirements in the wood industry, it is not enough to consider time-efficient algorithms; the models must also be adapted to manage extremely short calculation times. To develop efficient methods, inverse problems based on the fourth-order Euler-Bernoulli beam equation and the second-order string equation are studied. Important results are the transformation of a nonlinear regularization problem into a linear one and a convex procedure for finding parts with reduced bending stiffness.
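One of the results mentioned above, the transformation of the nonlinear regularization problem into a linear one, can be illustrated in a static setting: once the deflection u and the load q are known, the Euler-Bernoulli equation (a(x) u'')'' = q is linear in the stiffness a(x). The grid, the assumed deflection, the synthetic defect, and the Tikhonov weight below are illustrative assumptions, not the thesis's dynamic formulation.

```python
import numpy as np

def second_diff(n, h):
    """(n-2) x n second-difference matrix."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D / h ** 2

n, L = 121, 1.0
h = L / (n - 1)
x = np.linspace(0.0, L, n)
D2 = second_diff(n, h)            # maps u -> curvature w = u''
D2i = second_diff(n - 2, h)       # acts on quantities on interior nodes

# "true" stiffness with a local defect (the feature we want to localize)
a_true = 1.0 - 0.4 * np.exp(-((x[1:-1] - 0.6) / 0.05) ** 2)

u = np.sin(np.pi * x)             # stands in for the measured deflection
w = D2 @ u                        # curvature on interior nodes
q = D2i @ (a_true * w)            # consistent load: (a u'')'' = q

# Key point: for known u and q the beam equation is LINEAR in a, so the
# stiffness follows from a Tikhonov-regularized linear LS problem.
M = D2i @ np.diag(w)
R = second_diff(n - 2, h)         # penalize roughness of a
lam = 1e-4                        # tuned by hand for this noise-free toy
A_big = np.vstack([M, np.sqrt(lam) * R])
b_big = np.concatenate([q, np.zeros(R.shape[0])])
a_est, *_ = np.linalg.lstsq(A_big, b_big, rcond=None)
# dips in a_est localize the reduced-stiffness region around x = 0.6
```

With measured (noisy) displacements the choice of lam becomes the crucial regularization step, which is where the a priori information enters.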
|
130 |
Two-dimensional constrained anisotropic inversion of magnetotelluric data. Chen, Xiaoming, January 2012 (has links)
Tectonic and geological processes on Earth often result in structural anisotropy of the subsurface, which can be imaged by various geophysical methods. In order to achieve appropriate and realistic Earth models for interpretation, inversion algorithms have to allow for an anisotropic subsurface. Within the framework of this thesis, I analyzed a magnetotelluric (MT) data set taken from the Cape Fold Belt in South Africa. This data set exhibited strong indications of crustal anisotropy, e.g. MT phases out of the expected quadrant, which cannot be fitted or interpreted with standard isotropic inversion algorithms. To overcome this obstacle, I have developed a two-dimensional inversion method for reconstructing anisotropic electrical conductivity distributions.
The MT inverse problem represents in general a nonlinear and ill-posed minimization problem with many degrees of freedom: in the isotropic case, we have to assign an electrical conductivity value to each cell of a large grid to represent the Earth's subsurface; a grid with 100 x 50 cells, for example, results in 5000 unknown model parameters. In the anisotropic scenario, by contrast, we have six times as many, since the single value of electrical conductivity becomes a symmetric, real-valued tensor while the number of data remains unchanged. In order to successfully invert for anisotropic conductivities and to overcome the non-uniqueness of the solution of the inverse problem, it is necessary to use appropriate constraints on the class of allowed models. This becomes even more important as MT data are not equally sensitive to all anisotropy parameters. In this thesis, I have developed an algorithm in which the solution of the anisotropic inversion problem is calculated by minimizing a global penalty functional consisting of three terms: the data misfit, the model roughness constraint, and the anisotropy constraint. For comparison, in an isotropic approach only the first two terms are minimized. The newly defined anisotropy term is measured by the sum of the squared differences of the principal conductivity values of the model. The basic idea of this constraint is straightforward: if an isotropic model is already adequate to explain the data, there is no need to introduce electrical anisotropy at all.
In order to ensure a successful inversion, appropriate trade-off parameters, also known as regularization parameters, have to be chosen for the different model constraints. Synthetic tests show that using fixed trade-off parameters usually causes the inversion to end up in either a smooth model with a large RMS error or a rough model with a small RMS error. Using a relaxation approach on the regularization parameters after each successful inversion iteration results in a smoother inversion model and better convergence, and appears to be a practical way of selecting the trade-off parameters. In general, the proposed inversion method is adequate for resolving the principal conductivities defined in the horizontal plane. If none of the principal directions of the anisotropic structure coincides with the predefined strike direction, only the corresponding effective conductivities, which are the projections of the principal conductivities onto the model coordinate axes, can be resolved, and the information about the rotation angles is lost.
Finally, the MT data from the Cape Fold Belt in South Africa are analyzed. The MT data exhibit an area (> 10 km) where MT phases above 90 degrees occur. This part of the data cannot be modeled by standard isotropic modeling procedures and hence cannot be properly interpreted. The proposed inversion method, however, could not reproduce the anomalously large phases as desired, because the information about the rotation angles is lost. MT phases outside the first quadrant are usually produced by anisotropic anomalies with an oblique anisotropy strike. To meet this challenge, the algorithm needs further development. However, forward modeling studies with the MT data have shown that a highly conductive heterogeneity at the surface in combination with an electrically anisotropic mid-crustal zone is required to fit the data. According to known geological and tectonic information, the mid-crustal zone is interpreted as a deep aquifer related to the fractured Table Mountain Group rocks in the Cape Fold Belt. / Tectonic and geological processes often cause structural anisotropy of the subsurface, which can be observed with various geophysical methods. To construct and interpret suitable, realistic Earth models, inversion algorithms are needed that can incorporate an anisotropic subsurface. For this thesis I examined a magnetotelluric (MT) data set from the Cape Fold Belt in South Africa. These data point to a pronounced anisotropy of the crust, since, e.g., the MT phases lie outside the expected quadrant and cannot be fitted or interpreted with standard isotropic inversion algorithms. To overcome this problem, I developed a two-dimensional inversion method that admits anisotropic electrical conductivity distributions in the models.
The MT inversion is in general a nonlinear, ill-posed minimization problem with a large number of degrees of freedom. In the isotropic case, each grid cell of a model is assigned one electrical conductivity value in order to represent the Earth's subsurface; a model with, for example, 100 x 50 cells has 5000 unknown model parameters. In the anisotropic case, by contrast, we have six times as many, since the single electrical conductivity value becomes a symmetric, real-valued tensor while the number of data remains the same. For a successful inversion of anisotropic conductivities, and to overcome the non-uniqueness of the solution of the inverse problem, a suitable restriction of the admissible models is absolutely necessary. This becomes all the more important since the sensitivity of MT data is not the same for all anisotropy parameters. In this thesis I developed an algorithm that computes the solution of the anisotropic inversion problem by minimizing a global penalty functional consisting of three parts: the data misfit and additional constraints on the smoothness of the model and on the anisotropy. In the isotropic case, by contrast, only the first two terms are minimized. The newly defined anisotropy term is measured by the sum of the squared deviations of the principal conductivity values of the model. The basic idea of this constraint is simple: if an isotropic model can already fit the data sufficiently well, no electrical anisotropy is added to the model.
To guarantee a successful inversion, suitable regularization parameters must be chosen for the various constraints on the model. Tests with synthetic models show that with fixed regularization parameters the inversion usually ends up in either a smooth model with a large RMS error or a rough model with a small RMS error. Applying a relaxation condition to the regularization after each iteration step results in smoother inversion models and better convergence, and appears to be a sound way of choosing the parameters. The presented inversion method is in general able to find the principal conductivities in the horizontal plane. If none of the principal directions of the anisotropic structure coincides with the prescribed strike direction, only the corresponding effective conductivities, which represent the projections of the principal conductivities onto the coordinate axes of the model, can be resolved. The information about the rotation angles, however, is lost.
At the end of my thesis the MT data from the Cape Fold Belt in South Africa are analyzed. The MT data show phases above 90 degrees in one section of the profile (> 10 km). This part of the data cannot be fitted with conventional isotropic modeling procedures and therefore cannot be fully interpreted with them. The presented inversion method could not reproduce the unusually high phase values in the inversion result as desired, which can be explained by the aforementioned loss of information about the rotation angles. MT phases outside the first quadrant are usually measured for anomalies with an oblique anisotropy strike direction. To recover them in the inversion results as well, a further development of the algorithm is necessary. Forward modeling of the MT data set has shown, however, that a strong conductivity heterogeneity at the surface combined with a zone of electrical anisotropy in the middle crust is necessary to fit the data. On the basis of geological and tectonic information, this zone in the middle crust can be interpreted as a deep aquifer connected with the fractured rocks of the Table Mountain Group in the Cape Fold Belt.
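The three-term penalty functional described above is easy to state in code. In the sketch below the forward operator is a placeholder, the principal conductivities are stored as three values per cell, and the relaxation factor applied to the trade-off parameters after each successful iteration is an illustrative choice.

```python
import numpy as np

def penalty(sigma, data, forward, D, alpha, beta):
    """Global penalty functional of the anisotropic inversion.
    sigma   : (ncells, 3) principal conductivities per model cell
    forward : callable, synthetic MT responses for a model (placeholder)
    D       : roughness (difference) operator on the model grid
    """
    misfit = np.sum((data - forward(sigma)) ** 2)
    rough = sum(np.sum((D @ sigma[:, k]) ** 2) for k in range(3))
    # anisotropy constraint: sum of squared differences of the principal
    # conductivities -- zero exactly when every cell is isotropic
    s1, s2, s3 = sigma.T
    aniso = np.sum((s1 - s2) ** 2 + (s1 - s3) ** 2 + (s2 - s3) ** 2)
    return misfit + alpha * rough + beta * aniso

# After each successful iteration the trade-off parameters are relaxed,
# e.g. alpha *= 0.9; beta *= 0.9, so early iterations favor smooth,
# near-isotropic models and later ones are dominated by the data misfit.
```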
|