About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Improved Robust Stability Bounds for Sampled Data Systems with Time Delayed Feedback Control

Kurudamannil, Jubal J. 15 May 2015 (has links)
No description available.
272

On the effect of asymmetry and dimension on computational geometric problems

Sridhar, Vijay 07 November 2018 (has links)
No description available.
273

Currency Rollercoaster: Trade With Exchange Rate Volatility

Andersson, Felicia, Knobe Fredin, Oscar January 2024 (has links)
This essay examines the relationship between exchange rate volatility, estimated with a GARCH model, and the level of trade for Sweden and Finland. The data were collected from Refinitiv Eikon Datastream as monthly observations for the period January 2005 to December 2022. According to the ARDL model, the volatility of the Swedish krona and of the euro increases the level of trade for Sweden and Finland, respectively. When different time horizons were examined, however, the results were inconclusive across both countries and horizons. For Sweden, the ARDL bounds test was inconclusive regarding a possible positive long-term relationship between the SEK's exchange rate volatility and the level of trade, and the Granger causality test indicated neither a short-term relationship between the two variables nor a reversed one. For Finland, in contrast, the ARDL bounds test and the Granger causality test rejected both a long-term and a short-term positive relationship between the EUR's exchange rate volatility and the level of trade. A reversed Granger causality was found for Finland, however, indicating that the level of trade has an impact on the volatility of the EUR exchange rate.
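To make the first estimation stage concrete, a minimal Python sketch of fitting a GARCH(1,1) model to monthly exchange rate returns with the arch package could look as follows; the file name and column name are hypothetical, and this illustrates the general technique rather than the authors' exact specification.

    import pandas as pd
    from arch import arch_model

    # Monthly SEK/EUR exchange rates (hypothetical file and column names).
    fx = pd.read_csv("sek_eur_monthly.csv", index_col=0, parse_dates=True)
    returns = 100 * fx["sek_eur"].pct_change().dropna()  # percent returns

    # GARCH(1,1) with a constant mean; the fitted conditional volatility
    # series would then enter the ARDL trade equation as a regressor.
    model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
    res = model.fit(disp="off")
    volatility = res.conditional_volatility
    print(res.summary())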
274

Robust aspects of hedging and valuation in incomplete markets and related backward SDE theory

Tonleu, Klebert Kentia 16 March 2016 (has links)
This thesis starts with an analysis of backward stochastic differential equations (BSDEs) with jumps driven by random measures, possibly of infinite activity and with time-inhomogeneous compensators. Under concrete conditions that are easy to verify in applications, we prove existence, uniqueness and comparison results for bounded solutions for a class of generators that are not required to be globally Lipschitz in the jump integrand. The rest of the thesis deals with robust valuation and hedging in incomplete markets. The focus is on the no-good-deal approach, which computes good-deal valuation bounds by using only a subset of the risk-neutral measures with economic meaning (e.g. bounds on instantaneous Sharpe ratios, optimal growth rates, or expected utilities). Throughout, we study a notion of good-deal hedging that consists in minimizing suitable dynamic risk measures, allowing for optimal risk sharing with the market. Hedging is shown to be at least mean-self-financing, in that hedging errors satisfy a supermartingale property under suitable valuation measures. We derive constructive results on good-deal valuation and hedging in a jump framework using BSDEs with jumps, as well as in a Brownian setting with drift uncertainty using classical BSDEs and with volatility uncertainty using second-order BSDEs. We provide new examples that are particularly relevant for actuarial and financial applications. Under ambiguity about the real-world measure, a worst-case approach over multiple reference priors leads to good-deal hedging that is robust with respect to uncertainty, in that it is at least mean-self-financing uniformly over all priors. As a consequence, good-deal hedging becomes equivalent to risk minimization when drift uncertainty is sufficiently large.
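For orientation, the generic form of the BSDEs with jumps analysed in the first part can be sketched in LaTeX as follows; the notation is illustrative and not taken verbatim from the thesis:

    \begin{equation*}
    Y_t = \xi + \int_t^T f(s, Y_s, Z_s, U_s)\,\mathrm{d}s
            - \int_t^T Z_s\,\mathrm{d}W_s
            - \int_t^T\!\!\int_E U_s(e)\,\tilde\mu(\mathrm{d}s,\mathrm{d}e),
    \qquad t \in [0,T],
    \end{equation*}

where $W$ is a Brownian motion, $\tilde\mu$ is a compensated random measure (possibly of infinite activity, with a time-inhomogeneous compensator), and the generator $f$ need not be globally Lipschitz in the jump integrand $U$.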
275

On the numerical analysis of eigenvalue problems

Gedicke, Joscha Micha 05 November 2013 (has links)
This thesis on the numerical analysis of eigenvalue problems addresses five major aspects of the numerical analysis of adaptive finite element methods for eigenvalue problems. The first part presents a combined adaptive finite element method with an iterative algebraic eigenvalue solver for a symmetric eigenvalue problem, with asymptotically quasi-optimal computational complexity. The second part introduces fully computable two-sided bounds on the eigenvalues of the Laplace operator on arbitrarily coarse meshes, based on an approximation of the corresponding eigenfunction in the nonconforming Crouzeix-Raviart finite element space plus some postprocessing; the efficiency of the guaranteed error bounds involves the global mesh-size and is proven for the large class of graded meshes. The third part presents an adaptive finite element method (AFEM) based on nodal-patch refinement that leads to an asymptotic error reduction property for the adaptive sequence of simple eigenvalues and eigenfunctions of the Laplace operator. The proven saturation property yields reliability and efficiency for a class of hierarchical a posteriori error estimators. The fourth part considers a posteriori error estimators for convection-diffusion eigenvalue problems, as discussed by Heuveline and Rannacher (2001) in the context of the dual-weighted residual method (DWR); two new dual-weighted a posteriori error estimators are presented. The last part presents three adaptive algorithms for eigenvalue problems associated with non-selfadjoint partial differential operators. The basis for the developed algorithms is a homotopy method which departs from a well-understood selfadjoint problem. Apart from the adaptive grid refinement, the progress of the homotopy as well as the number of iterations of the algebraic solver are adapted to balance the contributions of the different error sources.
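The two-sided bounds of the second part can be illustrated by the following sketch, stated in the form familiar from the related literature on guaranteed Crouzeix-Raviart eigenvalue bounds (constants and notation are assumptions, not quoted from the thesis): with $\lambda_{CR}$ the nonconforming Crouzeix-Raviart approximation of the first Laplace eigenvalue $\lambda$, $H$ the maximal mesh-size, and $\kappa$ an explicit constant,

    \begin{equation*}
    \frac{\lambda_{CR}}{1 + \kappa^2 H^2 \lambda_{CR}}
      \;\le\; \lambda \;\le\;
      \frac{\int_\Omega |\nabla v_h|^2\,\mathrm{d}x}{\int_\Omega v_h^2\,\mathrm{d}x}
    \qquad \text{for any conforming } v_h \ne 0,
    \end{equation*}

the upper bound being the Rayleigh quotient of any conforming (e.g. postprocessed) approximate eigenfunction.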
276

Complexity of Normal Forms on Structures of Bounded Degree

Heimberg, Lucas 04 June 2018 (has links)
Normal forms express semantic properties of logics by means of syntactic restrictions. They allow algorithms to benefit from restrictions of the expressive power of a logic. An example is the locality of first-order logic (FO), which implies that properties like reachability or connectivity cannot be defined in FO. Gaifman's local normal form expresses the satisfaction conditions of an FO-formula by a Boolean combination of local statements. Gaifman normal form serves as a first step in fixed-parameter model-checking algorithms, parameterised by the size of the formula, on sparse graph classes. However, it is known that in general there are non-elementary lower bounds for the cost of transforming a formula into Gaifman normal form. This leads to an enormous parameter-dependency of the aforementioned algorithms. Similar non-elementary lower bounds also hold for Feferman-Vaught decompositions and for the preservation theorems of Lyndon, Łoś, and Tarski. This thesis investigates the complexity of these normal forms when attention is restricted to classes of structures of bounded degree, for which the non-elementary lower bounds are known to fail. Under this restriction, the thesis provides algorithms with elementary, and even worst-case optimal, running time for the construction of Gaifman normal forms and Feferman-Vaught decompositions. For the preservation theorems, algorithmic versions with elementary running time and non-matching lower bounds are provided. Crucial for these results is the notion of Hanf normal form. It is shown that an extension of FO by unary counting quantifiers admits Hanf normal forms if, and only if, all quantifiers are ultimately periodic, and furthermore, how Hanf normal forms can be computed in elementary and worst-case optimal time in these cases. This leads to model-checking algorithms for such extensions of FO and also allows generalisations of the constructions for Feferman-Vaught decompositions and preservation theorems.
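As a reference point, the "basic local sentences" whose Boolean combinations constitute a Gaifman normal form have the following standard shape, asserting the existence of $k$ pairwise far-apart witnesses of an $r$-local property:

    \begin{equation*}
    \exists x_1 \cdots \exists x_k
    \Big( \bigwedge_{1 \le i < j \le k} \mathrm{dist}(x_i, x_j) > 2r
          \;\wedge\; \bigwedge_{1 \le i \le k} \varphi^{(r)}(x_i) \Big),
    \end{equation*}

where $\varphi^{(r)}$ is a formula whose quantifiers range only over the ball of radius $r$ around its free variable. On structures of bounded degree there are, up to logical equivalence, only finitely many such local conditions per radius, which is what elementary constructions can exploit.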
277

Financial liberalisation and sustainable economic growth in ECOWAS countries

Owusu, Erasmus Labri 05 1900 (has links)
The thesis examines the comprehensive relationship between all aspects of financial liberalisation and economic growth in three countries of the Economic Community of West African States (ECOWAS). Employing the ARDL bounds test approach with real GDP per capita as the growth indicator, the thesis finds support for the McKinnon-Shaw hypothesis, but also finds that the subsequent increases in savings and investment have not been transmitted into economic growth in two of the studied countries. Moreover, the thesis finds that stock market developments have a negligible or negative impact on economic growth in two of the selected countries. The thesis concludes that in most cases it is not financial liberalisation policies that affect economic growth in the selected ECOWAS countries, but rather increases in the productivity of labour, in credit to the private sector, in foreign direct investment, in the capital stock and in government expenditure, contrary to expectations. Interestingly, the thesis also finds that exports have only a negative effect on economic growth in all the selected ECOWAS countries. The thesis therefore recommends that long-term export diversification programmes be implemented in the ECOWAS region while the issue is investigated further. / Economic Sciences / D. Litt et Phil. (Economics)
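As an illustration of the econometric machinery, a minimal sketch of the ARDL bounds test in Python with statsmodels could look as follows; the variable names, data file, and lag orders are hypothetical and for illustration only.

    import pandas as pd
    from statsmodels.tsa.ardl import UECM

    df = pd.read_csv("ecowas_country.csv", index_col=0, parse_dates=True)

    # Unrestricted error-correction model: real GDP per capita regressed on
    # a financial-liberalisation proxy and private-sector credit.
    uecm = UECM(df["gdp_pc"], lags=2,
                exog=df[["fin_lib", "credit_private"]], order=2)
    res = uecm.fit()

    # Pesaran/Shin/Smith bounds test: the F-statistic is compared with the
    # I(0) and I(1) critical bounds (case 3: unrestricted constant, no trend).
    print(res.bounds_test(case=3))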
278

Financial development and economic growth : new evidence from six countries

Nyasha, Sheilla 10 1900 (has links)
Using annual data for 1980-2012, the study empirically investigates the dynamic relationship between financial development and economic growth in three developing countries (South Africa, Brazil and Kenya) and three developed countries (the United States of America, the United Kingdom and Australia). The study was motivated by the current debate regarding the role of financial development in the economic growth process and the causal relationship between the two. The debate centres on whether financial development has a positive or negative impact on economic growth, and whether it Granger-causes economic growth or vice versa. To this end, two models are used: Model 1 examines the impact of bank-based and market-based financial development on economic growth, while Model 2 explores the causality between the two. Using the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and an error-correction-based causality test, the results were found to differ from country to country and over time, and to be sensitive to the financial development proxy used. Based on Model 1, the study found that the impact of bank-based financial development on economic growth is positive in South Africa and the USA, negative in the UK, and neither positive nor negative in Kenya; elsewhere the results were inconclusive. Market-based financial development was found to have a positive impact in Kenya, the USA and the UK, but not in the remaining countries. Based on Model 2, the study found that bank-based financial development Granger-causes economic growth in the UK, while in Brazil the two Granger-cause each other; in South Africa, Kenya and the USA no causal relationship was found, and in Australia the results were inconclusive. The study also found that in the short run, market-based financial development Granger-causes economic growth in the USA, whereas in South Africa and Brazil the reverse applies; in Kenya, bidirectional causality was found to prevail over the same period. / Economics / DCOM (Economics)
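A corresponding sketch of the short-run causality check, using the standard Granger causality test from statsmodels, might look like this; the column names are hypothetical, and the error-correction-based test used in the study refines this basic pattern.

    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    df = pd.read_csv("findev_growth.csv", index_col=0, parse_dates=True)

    # Tests whether lags of the second column ('bank_fin') help predict the
    # first ('growth') beyond growth's own lags, for lags 1 through 4.
    grangercausalitytests(df[["growth", "bank_fin"]].dropna(), maxlag=4)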
279

Towards a robust and effective strategy for the control of finite element computations in mechanical engineering

Pled, Florent 13 December 2012 (has links)
This research work aims at contributing to the development of innovative global and goal-oriented error estimation tools applied to computational mechanics. The global error estimators considered rely on the concept of constitutive relation error, through specific techniques for constructing admissible fields that ensure the recovery of strict, high-quality error estimates. A new hybrid method for constructing admissible stress fields is set up and compared to two other techniques with respect to three criteria, namely the quality of the associated error estimators, the computational cost, and the simplicity of practical implementation into finite element codes. An enhanced version of this new technique, based on local minimization of the complementary energy, is also proposed; judicious geometric and energetic criteria are introduced to select the relevant zones for locally optimizing the quality of the admissible fields. In the context of goal-oriented error estimation based on the use of both extraction techniques and global error estimators, two new improved bounding techniques are proposed. They lean on Saint-Venant's principle, through specific homotheticity properties, in order to obtain guaranteed and relevant bounds of better quality than with the classical bounding technique based on the Cauchy-Schwarz inequality. The various comparative studies are conducted on linear elasticity problems under quasi-static loading conditions. The behaviour of the different error estimators is illustrated and discussed through several numerical experiments carried out on industrial cases. The associated results may open up opportunities and help broaden the field of model verification for both academic research and industrial applications.
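The notion of constitutive relation error underlying these estimators has the following standard form for linear elasticity (notation assumed): given a kinematically admissible displacement field $\hat u$ and a statically admissible stress field $\hat\sigma$, with $C$ Hooke's elasticity tensor,

    \begin{equation*}
    E_{\mathrm{CRE}}^2(\hat u, \hat\sigma)
      = \int_\Omega \big(\hat\sigma - C\,\varepsilon(\hat u)\big) : C^{-1}
        \big(\hat\sigma - C\,\varepsilon(\hat u)\big)\,\mathrm{d}\Omega.
    \end{equation*}

By the Prager-Synge theorem the resulting estimate is guaranteed: the true discretization error in the energy norm is bounded by $E_{\mathrm{CRE}}$, which is why the quality of the constructed admissible stress field drives the sharpness of the bound.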
280

Minimal performance analysis for non-standard estimation models

Ren, Chengfang 28 September 2015 (has links)
In the parametric estimation context, the performance of an estimator can be characterized, inter alia, by its mean square error (MSE) and its resolution limit. The first quantifies the accuracy of the estimated values and the second defines the ability of the estimator to resolve distinct parameters. This thesis deals first with the prediction of the "optimal" MSE by using lower bounds in the hybrid estimation context (i.e. when the parameter vector contains both random and non-random parameters), second with the extension of Cramér-Rao bounds to non-standard estimation problems, and finally with the characterization of estimator resolution. The manuscript is divided into three parts. First, we fill some gaps in the hybrid lower bounds on the MSE by using two existing Bayesian lower bounds: the Weiss-Weinstein bound and a particular form of the Ziv-Zakai family of lower bounds. We show that these extended lower bounds are tighter than the existing hybrid lower bounds for predicting the optimal MSE. Second, we extend Cramér-Rao lower bounds to uncommon estimation contexts, namely: (i) when the non-random parameters are subject to equality constraints (linear or nonlinear); (ii) for discrete-time filtering problems in which the evolution of the states is governed by a Markov chain; (iii) when the observation model differs from the true data distribution. Finally, we study the resolution of estimators whose probability distributions are known. This approach extends the work of Oh and Kashyap and the work of Clark to multi-dimensional parameter estimation problems.
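For context, the classical Cramér-Rao bound that the thesis extends can be stated as follows (standard form): for an unbiased estimator $\hat\theta$ of a deterministic parameter vector $\theta$ with observation density $p(\mathbf y;\theta)$,

    \begin{equation*}
    \mathbb{E}\big[(\hat\theta - \theta)(\hat\theta - \theta)^{\mathsf T}\big]
      \succeq \mathbf F^{-1}(\theta),
    \qquad
    [\mathbf F(\theta)]_{ij}
      = \mathbb{E}\!\left[\frac{\partial \ln p(\mathbf y;\theta)}{\partial \theta_i}
          \,\frac{\partial \ln p(\mathbf y;\theta)}{\partial \theta_j}\right],
    \end{equation*}

where $\succeq$ denotes the partial order of positive semidefinite matrices; the hybrid, constrained and mismatched bounds studied in the thesis generalize this inequality to the respective settings.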
