821

Shooting method based algorithms for solving control problems associated with second order hyperbolic PDEs

Luo, Biyong. January 2001 (has links)
Thesis (Ph.D.)--York University, 2001. Graduate Programme in Mathematics. / Typescript. Includes bibliographical references (leaves 114-119). Also available on the Internet; mode of access: via web browser at the following URL: http://wwwlib.umi.com/cr/yorku/fullcit?pNQ66358.
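
The record carries no abstract, but the shooting idea named in the title is straightforward: recast a boundary value problem as an initial value problem and adjust the unknown initial data until the far boundary condition is met. Below is a minimal numerical sketch in Python for a scalar two-point BVP; it illustrates only the classical shooting step, not the thesis's algorithms for control problems with hyperbolic PDEs, and the test problem is our own.

    import numpy as np

    def integrate(s, n=1000):
        """RK4 for y'' = -y on [0, 1] with y(0) = 0, y'(0) = s; returns y(1)."""
        h = 1.0 / n
        u = np.array([0.0, s])                 # u = (y, y')
        f = lambda v: np.array([v[1], -v[0]])  # y'' = -y as a first-order system
        for _ in range(n):
            k1 = f(u); k2 = f(u + h/2*k1); k3 = f(u + h/2*k2); k4 = f(u + h*k3)
            u = u + h/6*(k1 + 2*k2 + 2*k3 + k4)
        return u[0]

    def shoot(target=1.0, lo=0.0, hi=2.0, tol=1e-10):
        """Bisect on the unknown initial slope until y(1) hits the boundary value."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if integrate(mid) < target else (lo, mid)
        return 0.5 * (lo + hi)

    # Exact solution is y(x) = sin(x)/sin(1), so the true initial slope is 1/sin(1).
    print(shoot(), 1.0 / np.sin(1.0))          # both ~ 1.18840
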
822

Three Topics in Analysis: (I) The Fundamental Theorem of Calculus Implies that of Algebra, (II) Mini Sums for the Riesz Representing Measure, and (III) Holomorphic Domination and Complex Banach Manifolds Similar to Stein Manifolds

Mathew, Panakkal J 13 May 2011 (has links)
We look at three distinct topics in analysis. In the first we give a direct and easy proof that the usual Newton-Leibniz rule implies the fundamental theorem of algebra: any nonconstant polynomial of one complex variable has a complex root. Next, we look at the Riesz representation theorem and show that the Riesz representing measure can often be given in the form of mini sums, just as in the case of the usual Lebesgue measure on a cube. Lastly, we look at the idea of holomorphic domination and use it to define a class of complex Banach manifolds that is similar in nature and definition to the class of Stein manifolds.
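
The first topic's conclusion can be checked numerically through the argument principle: (1/(2*pi*i)) times the contour integral of p'(z)/p(z) counts the roots of p enclosed by the contour. The sketch below is only such a numerical companion to the theorem, not the Newton-Leibniz argument of the thesis.

    import numpy as np

    p = np.poly1d([1, 0, 0, -1])        # p(z) = z^3 - 1, roots are the cube roots of 1
    dp = p.deriv()

    theta = np.linspace(0.0, 2.0 * np.pi, 20001)
    z = 2.0 * np.exp(1j * theta)        # the circle |z| = 2 encloses all three roots
    integrand = dp(z) / p(z) * 1j * z   # p'(z)/p(z) dz with dz = i*z dtheta
    roots_inside = np.trapz(integrand, theta) / (2j * np.pi)
    print(roots_inside.real)            # ~ 3.0
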
823

Algorithmics of Network Calculus / Algorithmique du Network Calculus

Jouhet, Laurent 07 November 2012 (has links) (PDF)
Network Calculus is a theory for computing worst-case bounds on the performance of communication networks. The network is modelled as a directed graph in which the nodes represent servers, and the flows crossing the network must follow the arcs. On top of this come constraints on the arrival curves (the amount of data that has passed through a given point since the network started) and on the service curves (the amount of work delivered by each server). To bound worst-case performance, such as the backlog at various points or end-to-end delays, these envelopes are combined using operators drawn in particular from tropical algebras: min, +, (min, +)-convolution, and so on. This thesis is centred on the algorithmics of Network Calculus, that is, on how to make this formalism effective. This work first led us to compare the variants of the models found in the literature, revealing equivalences in expressive power, for instance between Real-Time Calculus and Network Calculus. Second, we propose a new (min, +) operator to handle performance computations in the presence of flow aggregation, and we study the case of networks with no cyclic dependencies between flows and with arbitrary service policies. We show the algorithmic hardness of computing the worst cases exactly, but we also provide a new heuristic for computing them, which turns out to have polynomial complexity in interesting cases.
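
For the most common curve shapes these worst-case bounds have simple closed forms: a token-bucket arrival curve alpha(t) = sigma + rho*t and a rate-latency service curve beta(t) = R*max(t - T, 0) give, whenever rho <= R, a backlog bound of sigma + rho*T (the maximal vertical deviation between the curves) and a delay bound of T + sigma/R (the maximal horizontal deviation). A minimal sketch checking both numerically, with illustrative parameter values:

    import numpy as np

    sigma, rho, R, T = 5.0, 2.0, 4.0, 1.5      # burst, arrival rate, service rate, latency

    t = np.linspace(0.0, 50.0, 50001)
    alpha = sigma + rho * t                    # token-bucket arrival curve
    beta = R * np.maximum(t - T, 0.0)          # rate-latency service curve

    backlog = np.max(alpha - beta)             # maximal vertical deviation
    # beta(x) >= a iff x >= T + a/R, so the horizontal deviation at time t is
    # max(0, T + alpha(t)/R - t).
    delay = np.max(np.maximum(T + alpha / R - t, 0.0))

    print(backlog, sigma + rho * T)            # 8.0  in both cases
    print(delay, T + sigma / R)                # 2.75 in both cases
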
824

An APOS exploration of conceptual understanding of the chain rule in calculus by first year engineering students.

Jojo, Zingiswa Mybert Monica. January 2011 (has links)
The main issue in this study is how students conceptualise mathematical learning in the context of calculus, with specific reference to the chain rule. The study focuses on how students use the chain rule in finding derivatives of composite functions (including trigonometric ones). The study was based on the APOS (Action-Process-Object-Schema) approach in exploring the conceptual understanding displayed by first-year University of Technology students in learning the chain rule in calculus. The study consisted of two phases, both using a qualitative approach. Phase 1 was the pilot study, in which data were collected via a questionnaire administered to 23 students of known ability from the previous semester who were willing to participate in the study. The questionnaire was then administered to 30 volunteering first-year students in Phase 2. A structured way to describe an individual student's understanding of the chain rule was developed and applied to analysing the evolution of that understanding for each of the 30 first-year students. Various methods of data collection were used, namely: (1) classroom observations, (2) an open-ended questionnaire, (3) semi-structured and unstructured interviews, (4) video recordings, and (5) written class work, tests and exercises. The research indicates that it is essential for instructional design to accommodate multiple ways of representing functions, to enable students to make connections and gain a deeper understanding of the chain rule. Learning activities should include tasks that demand all three techniques, the Straight form technique, the Link form technique and the Leibniz form technique, to cater for the variation in learner preferences. It is believed that the APOS paradigm, using selected activities, brought the students to the point of being better able to understand the chain rule, and informed the teaching strategies for this concept. In this way, it is believed that this conceptualisation will enable the formulation of a schema of the chain rule which can be applied to a wider range of contexts in calculus. There is a need to establish a conceptual basis that allows construction of a schema of the chain rule. The understanding of the concept and its attendant skills can then be augmented by instructional design based on the modified genetic decomposition. This will then lead students to a better understanding of the chain rule, and hence of more of calculus and its applications. / Thesis (Ph.D.)-University of KwaZulu-Natal, Edgewood, 2011.
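
The three techniques named above all compute the same derivative, which is easy to confirm symbolically; in the sketch below, identifying the 'Straight form' with direct differentiation and the 'Leibniz form' with du/dx = (du/dv)(dv/dx) is our reading of the abstract.

    import sympy as sp

    x, v = sp.symbols('x v')

    # Composite trigonometric function f(x) = sin(x**2): outer sin(v), inner v = x**2.
    outer, inner = sp.sin(v), x**2

    # Leibniz form: multiply the derivatives of the outer and inner functions.
    leibniz = sp.diff(outer, v).subs(v, inner) * sp.diff(inner, x)

    # Straight form: differentiate the composition directly.
    straight = sp.diff(sp.sin(x**2), x)

    print(sp.simplify(leibniz - straight))     # 0: the two techniques agree
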
825

On probability distributions of diffusions and financial models with non-globally smooth coefficients

De Marco, Stefano 23 November 2010 (has links) (PDF)
Some recent works in the field of mathematical finance have shed new light on the importance of studying the regularity and the tail asymptotics of distributions for certain classes of diffusions with non-globally smooth coefficients. In this Ph.D. dissertation we deal with some issues in this framework. In the first part, we study the existence, smoothness and space asymptotics of densities for the solutions of stochastic differential equations, assuming only local conditions on the coefficients of the equation. Our analysis is based on Malliavin calculus tools and on "tube estimates" for Itô processes, namely estimates for the probability that the trajectory of an Itô process remains close to a deterministic curve. We obtain significant estimates of densities and distribution functions in general classes of option pricing models, including generalisations of CIR and CEV processes and Local-Stochastic Volatility models. In the latter case, the estimates we derive have an impact on the moment explosion of the underlying price and, consequently, on the large-strike behaviour of the implied volatility. Parametric implied volatility modelling, in its turn, is the subject of the second part. In particular, we focus on J. Gatheral's SVI model, first proposing an effective quasi-explicit calibration procedure and demonstrating its performance on market data. Then, we analyse the capability of SVI to generate efficient approximations of symmetric smiles, building an explicit time-dependent parameterisation. We provide and test the numerical application to the Heston model (without and with displacement), for which we generate semi-closed expressions of the smile.
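
The SVI model of the second part has a standard closed form: in Gatheral's 'raw' parameterisation the total implied variance at log-moneyness k is w(k) = a + b*(rho*(k - m) + sqrt((k - m)**2 + sigma**2)). A minimal sketch with illustrative parameter values; the quasi-explicit calibration procedure of the thesis is not reproduced here.

    import numpy as np

    def svi_total_variance(k, a, b, rho, m, sigma):
        """Gatheral's raw SVI total implied variance w(k) = vol**2 * T at log-moneyness k."""
        return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

    k = np.linspace(-1.0, 1.0, 9)
    w = svi_total_variance(k, a=0.04, b=0.4, rho=-0.4, m=0.0, sigma=0.1)
    print(np.sqrt(w / 1.0))   # implied volatility smile for maturity T = 1
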
826

Random Matrix Theory with Applications in Statistics and Finance

Saad, Nadia Abdel Samie Basyouni Kotb 22 January 2013 (has links)
This thesis investigates a technique to estimate the risk of the mean-variance (MV) portfolio optimization problem, which we call the Scaling technique. It provides a better estimator of the risk of the MV optimal portfolio. We obtain this result for a general estimator of the covariance matrix of the returns, which includes the correlated sampling case as well as the independent sampling case and the exponentially weighted moving average case. This gave rise to the paper [CMcS]. Our result concerning the Scaling technique relies on the moments of the inverse of compound Wishart matrices, an open problem in the theory of random matrices. We in fact tackle a much more general setup, considering any random matrix whose distribution has an appropriate invariance property (orthogonal or unitary) under an appropriate action (by conjugation, or by a left-right action). Our approach is based on Weingarten calculus. As an interesting byproduct of our study, and as a preliminary to computing the moments of the inverse of a compound Wishart random matrix, we obtain explicit moment formulas for the pseudo-inverse of Ginibre random matrices. These results are also given in the paper [CMS]. Using the moments of the inverse of compound Wishart matrices, we obtain asymptotically unbiased estimators of the risk and the weights of the MV portfolio. Finally, we present some numerical results, which form part of our future work.
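
The bias that makes such a correction necessary is visible in the simplest case: if S is a Wishart-distributed sample covariance built from n centred Gaussian samples in dimension p, then E[inv(S)] = n/(n - p - 1) * inv(Sigma), so the naive plug-in estimator systematically overstates inv(Sigma). The Monte Carlo sketch below is a toy check of this fact, not the estimator of [CMcS].

    import numpy as np

    rng = np.random.default_rng(0)
    p, n, trials = 10, 50, 2000

    est = np.zeros((p, p))
    for _ in range(trials):
        X = rng.standard_normal((n, p))   # n i.i.d. samples, true covariance Sigma = I
        S = X.T @ X / n                   # sample covariance (mean known to be zero)
        est += np.linalg.inv(S)
    est /= trials

    # E[inv(S)] = n/(n - p - 1) * inv(Sigma); with Sigma = I the average diagonal
    # entry should therefore be n/(n - p - 1), not 1.
    print(np.trace(est) / p, n / (n - p - 1))   # both ~ 1.282
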
827

Green's Functions of Discrete Fractional Calculus Boundary Value Problems and an Application of Discrete Fractional Calculus to a Pharmacokinetic Model

Charoenphon, Sutthirut 01 May 2014 (has links)
Fractional calculus has been used as a research tool in the fields of pharmacology, biology, chemistry, and other areas [3]. The main purpose of this thesis is to calculate Green's functions of fractional difference equations, and to model problems in pharmacokinetics. We claim that discrete fractional calculus yields the best prediction performance, compared to continuous fractional calculus, in the application of a one-compartmental model of drug concentration. In Chapter 1, the Gamma function and its properties are discussed to establish a theoretical basis. Additionally, the basics of discrete fractional calculus are discussed, using particular examples for further calculations. In Chapter 2, we use these basic results in the analysis of a linear fractional difference equation. Existence of solutions to this difference equation is then established for both initial value problems (IVP) and two-point boundary value problems (BVP). In Chapter 3, Green's functions are introduced and discussed, along with examples. Instead of using Cauchy functions, the technique of finding Green's functions by a traditional method is demonstrated and used throughout this chapter. The solutions of the BVP play an important role in the analysis and construction of the Green's functions. Green's functions for the discrete calculus case are then calculated for particular problems, such as boundary value problems, discrete boundary value problems (DBVP) and fractional boundary value problems (FBVP). Finally, we demonstrate how the Green's functions of the FBVP generalize the existence results of the Green's functions of the DBVP. In Chapter 4, different compartmental pharmacokinetic models are discussed. This thesis limits discussion to the one-compartmental model. The Mathematica FindFit command and the statistical computational techniques of mean square error (MSE) and cross-validation are discussed. Each of the four models (continuous, continuous fractional, discrete and discrete fractional) is used to compute the MSE numerically for the given drug-concentration data. The best fit and the best model are then obtained by inspection of the resulting MSE. In the last chapter, the results are summarized, conclusions are drawn, and directions for future work are stated.
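
For concreteness, one common convention for the fractional sum underlying such fractional difference equations (following Atici and Eloe) is (Delta^(-nu) f)(t) = (1/Gamma(nu)) * sum over s from a to t - nu of (t - s - 1)^(nu - 1) * f(s), where x^(nu) = Gamma(x + 1)/Gamma(x - nu + 1) is the falling factorial and t ranges over {a + nu, a + nu + 1, ...}. The sketch below assumes this convention, which may differ in detail from the one used in the thesis.

    from math import gamma

    def falling(x, nu):
        """Generalised falling factorial x^(nu) = Gamma(x + 1) / Gamma(x - nu + 1)."""
        return gamma(x + 1) / gamma(x - nu + 1)

    def frac_sum(f, a, nu, t):
        """nu-th order fractional sum (Delta^{-nu} f)(t) in the Atici-Eloe convention,
        for t in {a + nu, a + nu + 1, ...}."""
        total, s = 0.0, a
        while s <= t - nu:
            total += falling(t - s - 1, nu - 1) * f(s)
            s += 1
        return total / gamma(nu)

    # Sanity check: with nu = 1 the fractional sum reduces to an ordinary sum.
    print(frac_sum(lambda s: s, a=0, nu=1.0, t=5))   # 0+1+2+3+4 = 10.0
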
828

Proof system for logic of correlated knowledge / Įrodymų sistema koreliatyvių žinių logikai

Giedra, Haroldas 30 December 2014 (has links)
An automated proof system for the logic of correlated knowledge is presented in the dissertation. The system consists of the sequent calculus GS-LCK and the proof search procedure GS-LCK-PROC. The sequent calculus is sound and complete, and satisfies the properties of invertibility of rules and admissibility of weakening, contraction and cut. The procedure GS-LCK-PROC is terminating and allows one to check whether a sequent is provable. Decidability of the logic of correlated knowledge has also been proved: using the terminating procedure GS-LCK-PROC, the validity of any formula of the logic of correlated knowledge can be checked.
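
Backward proof search of the kind GS-LCK-PROC performs is easiest to see in a plain propositional sequent calculus. The sketch below is a generic prover with invertible (G3-style) rules, not the GS-LCK calculus itself, which additionally handles the epistemic machinery of correlated knowledge.

    # Formulas: 'p' (atom), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).
    def provable(gamma, delta):
        """Backward proof search for the sequent gamma |- delta; every rule is
        invertible, so no backtracking over the choice of principal formula is needed."""
        for i, f in enumerate(gamma):                    # left rules
            if isinstance(f, tuple):
                rest, op = gamma[:i] + gamma[i+1:], f[0]
                if op == 'not':
                    return provable(rest, delta + [f[1]])
                if op == 'and':
                    return provable(rest + [f[1], f[2]], delta)
                if op == 'or':
                    return (provable(rest + [f[1]], delta)
                            and provable(rest + [f[2]], delta))
                if op == 'imp':
                    return (provable(rest, delta + [f[1]])
                            and provable(rest + [f[2]], delta))
        for i, f in enumerate(delta):                    # right rules
            if isinstance(f, tuple):
                rest, op = delta[:i] + delta[i+1:], f[0]
                if op == 'not':
                    return provable(gamma + [f[1]], rest)
                if op == 'and':
                    return (provable(gamma, rest + [f[1]])
                            and provable(gamma, rest + [f[2]]))
                if op == 'or':
                    return provable(gamma, rest + [f[1], f[2]])
                if op == 'imp':
                    return provable(gamma + [f[1]], rest + [f[2]])
        return any(a in delta for a in gamma)            # axiom: a shared atom

    # Peirce's law ((p -> q) -> p) -> p is classically provable:
    print(provable([], [('imp', ('imp', ('imp', 'p', 'q'), 'p'), 'p')]))   # True
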
829

Constructing quantum spacetime: relation to classical gravity

Steinhaus, Sebastian January 2014 (has links)
Despite remarkable progress made in the past century, which has revolutionized our understanding of the universe, there are numerous open questions left in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions - electromagnetic, weak and strong - on a flat, static spacetime without gravity. A theory of quantum gravity is hoped to cure this incompatibility by, heuristically, replacing classical spacetime with 'quantum spacetime'. Several approaches attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize. Interestingly, many of these models rely on discrete structures in their definition or postulate that the discreteness of spacetime is fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both complicate establishing the connection to the classical continuum theory. The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done at different levels of the discretisation / triangulation, ranging from a few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical principles, e.g. diffeomorphism invariance in the discrete, to fix the ambiguities of the models. In order to satisfy these conditions, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a method to consistently construct the continuum theory, which comes with a unique vacuum state.
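
The coarse-graining and fixed-point language above has a much simpler classical illustration: decimation renormalisation of the one-dimensional Ising chain, where integrating out every other spin maps the nearest-neighbour coupling K to K' = ln(cosh(2K))/2. Iterating the map exposes its fixed points (K = 0 stable, K = infinity unstable). This toy flow is our illustration, not one of the spin foam models studied in the thesis.

    import numpy as np

    def rg_step(K):
        """Decimation RG map of the 1D Ising chain: K -> ln(cosh(2K)) / 2."""
        return 0.5 * np.log(np.cosh(2.0 * K))

    for K0 in (0.5, 1.0, 2.0):
        K = K0
        for _ in range(20):
            K = rg_step(K)
        print(K0, '->', K)   # every finite coupling flows to the trivial fixed point K = 0
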
830

A Reasoning Module for Long-lived Cognitive Agents

Vassos, Stavros 03 March 2010 (has links)
In this thesis we study a reasoning module for agents that have cognitive abilities, such as memory, perception, and action, and are expected to function autonomously for long periods of time. The module provides the ability to reason about action and change using the language of the situation calculus and variants of the basic action theories. The main focus of this thesis is the logical problem of progressing an action theory. First, we investigate the conjecture by Lin and Reiter that a practical first-order definition of progression is not appropriate for the general case. We show that Lin and Reiter were indeed correct in their intuitions by providing a proof for the conjecture, thus resolving the open question about the first-order definability of progression and justifying the need for a second-order definition. We then proceed to identify three cases where it is possible to obtain a first-order progression with the desired properties: i) we extend earlier work by Lin and Reiter and present a case where we restrict our attention to a practical class of queries that may only quantify over situations in a limited way; ii) we revisit the local-effect assumption of Liu and Levesque, which requires that the effects of an action be fixed by the arguments of the action, and show that in this case a first-order progression is suitable; iii) we investigate a way in which the local-effect assumption can be relaxed, and show that when the initial knowledge base is a database of possible closures and the effects of the actions are range-restricted, a first-order progression is also suitable under a just-in-time assumption. Finally, we examine a special case of the action theories with range-restricted effects and present an algorithm for computing a finite progression. We prove the correctness and the complexity of the algorithm, and show its application in a simple example inspired by video games.
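
In the simplest database-like setting, progression just updates a complete set of known facts by an action's effects. The propositional sketch below illustrates only this local-effect idea; the fluent and action names are invented, and the first- and second-order situation calculus machinery of the thesis is far richer.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Action:
        name: str
        adds: frozenset = field(default_factory=frozenset)
        dels: frozenset = field(default_factory=frozenset)

    def progress(state, action):
        """Progress a complete propositional state through one action: remove the
        action's negative effects, then add its positive effects."""
        return (state - action.dels) | action.adds

    state = frozenset({'at(robot, room1)', 'holding(nothing)'})
    move = Action('move(room1, room2)',
                  adds=frozenset({'at(robot, room2)'}),
                  dels=frozenset({'at(robot, room1)'}))
    print(sorted(progress(state, move)))   # ['at(robot, room2)', 'holding(nothing)']
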
