811 |
Random Matrix Theory with Applications in Statistics and Finance. Saad, Nadia Abdel Samie Basyouni Kotb, 22 January 2013 (has links)
This thesis investigates a technique to estimate the risk of the mean-variance (MV) portfolio optimization problem, which we call the Scaling technique. It provides a better estimator of the risk of the MV optimal portfolio. We obtain this result for a general estimator of the covariance matrix of the returns, which includes the correlated sampling case as well as the independent sampling case and the exponentially weighted moving average case. This work gave rise to the paper [CMcS].
Our result concerning the Scaling technique relies on the moments of the inverse of compound Wishart matrices, an open problem in the theory of random matrices. We actually tackle a much more general setup, where we consider any random matrix provided that its distribution has an appropriate invariance property (orthogonal or unitary) under an appropriate action (by conjugation, or by a left-right action). Our approach is based on Weingarten calculus. As an interesting byproduct of our study, and as a preliminary to the solution of our problem of computing the moments of the inverse of a compound Wishart random matrix, we obtain explicit moment formulas for the pseudo-inverse of Ginibre random matrices. These results are given in the paper [CMS].
Using the moments of the inverse of compound Wishart matrices, we obtain asymptotically unbiased estimators of the risk and the weights of the MV portfolio. Finally, we present some numerical results, which form part of our future work.
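The underestimation that motivates a scaling correction can be reproduced in a small simulation. This is a sketch only: the identity covariance, dimensions, seed, and plug-in estimator below are illustrative assumptions, not the thesis's general compound Wishart setting.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 50                       # number of assets, number of observations
sigma = np.eye(p)                   # true covariance (illustrative assumption)

# Sample covariance from n i.i.d. return vectors
returns = rng.multivariate_normal(np.zeros(p), sigma, size=n)
sigma_hat = np.cov(returns, rowvar=False)

# Minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)
ones = np.ones(p)
inv = np.linalg.inv(sigma_hat)
w = inv @ ones / (ones @ inv @ ones)

est_risk = w @ sigma_hat @ w        # in-sample ("plug-in") risk estimate
true_risk = w @ sigma @ w           # realized risk under the true covariance

# The plug-in estimate is typically biased downward; removing this
# optimism is what a scaling-type correction aims at.
print(est_risk, true_risk)
```

In runs like this the plug-in risk comes out below the realized risk; rescaling the estimate to undo that bias is the idea the abstract refers to.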
|
812 |
Formal Object Interaction Language: Modeling and Verification of Sequential and Concurrent Object-Oriented Software. Pamplin, Jason Andrew, 03 May 2007 (has links)
As software systems become larger and more complex, developers require the ability to model abstract concepts while ensuring consistency across the entire project. The internet has changed the nature of software by increasing the desire for software deployment across multiple distributed platforms. Finally, increased dependence on technology requires assurance that designed software will perform its intended function. This thesis introduces the Formal Object Interaction Language (FOIL). FOIL is a new object-oriented modeling language specifically designed to address the cumulative shortcomings of existing modeling techniques. FOIL graphically displays software structure, sequential and concurrent behavior, process, and interaction in a simple unified notation, and has an algebraic representation based on a derivative of the π-calculus. The thesis documents the technique by which FOIL software models can be mathematically verified to anticipate deadlocks, ensure consistency, and determine object state reachability. Scalability is offered through the concept of behavioral inheritance, and FOIL's inherent support for modeling concurrent behavior and all known workflow patterns is demonstrated. The concepts of process achievability, process complete achievability, and process determinism are introduced with an algorithm for simulating the execution of a FOIL object model using a FOIL process model. Finally, a technique for using a FOIL process model as a constraint on FOIL object system execution is offered as a method to ensure that object-oriented systems modeled in FOIL will complete their process-based activities. FOIL's capabilities are compared and contrasted with an extensive array of current software modeling techniques. FOIL is ideally suited for data-aware, behavior-based systems such as interactive or process management software.
|
813 |
Mathematical analysis of models of non-homogeneous fluids and of hyperbolic equations with low regularity coefficients. Fanelli, Francesco, 28 May 2012 (has links) (PDF)
The present thesis is devoted to the study both of strictly hyperbolic operators with low regularity coefficients and of the density-dependent incompressible Euler system. On the one hand, we show a priori estimates for a second order strictly hyperbolic operator whose highest order coefficients satisfy a log-Zygmund continuity condition in time and a log-Lipschitz continuity condition with respect to space. Such an estimate involves a time-increasing loss of derivatives. Nevertheless, this is enough to recover well-posedness for the associated Cauchy problem in the space $H^\infty$ (for suitably smooth second order coefficients). First, we consider a complete operator in space dimension $1$, whose first order coefficients are assumed Hölder continuous and whose zero order coefficient is only bounded. Then, we deal with the general case of any space dimension, focusing on a homogeneous second order operator: the step to higher dimensions requires a really different approach. On the other hand, we consider the density-dependent incompressible Euler system. We show its well-posedness in endpoint Besov spaces embedded in the class of globally Lipschitz functions, producing also lower bounds for the lifespan of the solution in terms of the initial data only. This done, we prove persistence of geometric structures, such as striated and conormal regularity, for solutions to this system. In contrast with the classical case of constant density, even in dimension $2$ the vorticity is not transported by the velocity field. Hence, a priori one can expect only local in time results. For the same reason, we also have to dismiss the vortex patch structure. Littlewood-Paley theory and paradifferential calculus allow us to handle these two different problems. A new version of paradifferential calculus, depending on a parameter $g \geq 1$, is also needed to deal with hyperbolic operators with non-regular coefficients.
The general framework is that of Besov spaces, which includes in particular Sobolev and Hölder spaces. Intermediate classes of functions, of logarithmic type, come into play as well.
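For reference, the log-Zygmund (in time) and log-Lipschitz (in space) conditions mentioned in this abstract are commonly stated as follows; this is a standard formulation from the literature, not quoted from the thesis itself:

```latex
% log-Zygmund continuity in time (highest order coefficients a):
\bigl| a(t+\tau,x) + a(t-\tau,x) - 2\,a(t,x) \bigr|
  \;\le\; C\,\tau \log\!\left(1+\frac{1}{\tau}\right)
% log-Lipschitz continuity in space:
\bigl| a(t,x+y) - a(t,x) \bigr|
  \;\le\; C\,|y| \log\!\left(1+\frac{1}{|y|}\right)
```

Both conditions are weaker than Lipschitz continuity by a logarithmic factor, which is the source of the time-increasing loss of derivatives in the estimates.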
|
814 |
Fractional Calculus: Definitions and Applications. Kimeu, Joseph M., 01 April 2009 (has links)
No description available.
|
815 |
A Study of Variable Thrust, Variable Specific Impulse Trajectories for Solar System Exploration. Sakai, Tadashi, 07 December 2004 (has links)
A study has been performed to determine the advantages and disadvantages of variable thrust and variable specific impulse (Isp) trajectories for solar system exploration.
There have been several numerical research efforts on variable thrust, variable Isp, power-limited trajectory optimization problems. All of these results conclude that variable thrust, variable Isp (variable specific impulse, or VSI) engines are superior to constant thrust, constant Isp (constant specific impulse, or CSI) engines. However, most of these research efforts assume a mission from Earth to Mars, and some of them further assume that the orbits of these planets are circular and coplanar. Hence they still lack generality.
This research has been conducted to answer the following questions:
- Is a VSI engine always better than a CSI engine or a high thrust engine for any mission to any planet with any time of flight considering lower propellant mass as the sole criterion?
- If a planetary swing-by is used for a VSI trajectory, are the fuel savings of a VSI swing-by trajectory greater than those of a CSI swing-by or high thrust swing-by trajectory?
To support this research, a unique, new computer-based interplanetary trajectory calculation program has been created. This program utilizes a calculus of variations algorithm to perform overall optimization of thrust, Isp, and thrust vector direction along a trajectory that minimizes fuel consumption for interplanetary travel. It is assumed that the propulsion system is power-limited, and thus the compromise between thrust and Isp is a variable to be optimized along the flight path. This program is capable of optimizing not only variable thrust trajectories but also constant thrust trajectories in 3-D space using a planetary ephemeris database. It is also capable of conducting planetary swing-bys.
Using this program, various Earth-originating trajectories have been investigated and the optimized results have been compared to traditional CSI and high thrust trajectory solutions. Results show that VSI rocket engines reduce fuel requirements for any mission compared to CSI rocket engines. Fuel can be saved by applying swing-by maneuvers for VSI engines, but the effects of swing-bys for VSI engines are smaller than those for CSI or high thrust engines.
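For power-limited propulsion, propellant consumption grows with the integral of the squared acceleration, so a 1-D field-free rendezvous makes a convenient toy comparison between a continuously throttled (VSI-like) profile and a constant-magnitude (CSI-like) one. This is a textbook illustration under simplifying assumptions, not the thesis's trajectory optimizer:

```python
import numpy as np

# 1-D field-free rendezvous: x(0)=0, v(0)=0  ->  x(1)=1, v(1)=0.
# For power-limited engines, propellant use grows with J = integral of a(t)^2.

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]

# Unconstrained optimum (VSI-like): acceleration linear in time,
# a(t) = 6 - 12 t, obtained from the calculus of variations.
a_vsi = 6.0 - 12.0 * t
J_vsi = np.sum(a_vsi**2) * dt       # analytic value: 12

# Constant-magnitude profile (CSI-like): accelerate, then brake.
a_csi = np.where(t < 0.5, 4.0, -4.0)
J_csi = np.sum(a_csi**2) * dt       # analytic value: 16

print(J_vsi, J_csi)                 # the VSI-like profile needs less "fuel"
```

Both profiles satisfy the same boundary conditions, yet the continuously varying one spends J = 12 against 16 for the bang-bang one, mirroring the VSI-versus-CSI comparison in the abstract.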
|
816 |
Environment Analysis of Higher-Order Languages. Might, Matthew Brendon, 29 June 2007 (has links)
Any analysis of higher-order languages must grapple with the tri-faceted nature of lambda. In one construct, the fundamental control, environment and data structures of a language meet and intertwine. With the control facet tamed nearly two decades ago, this work brings the environment facet to heel, defining the environment problem and developing its solution: environment analysis. Environment analysis allows a compiler to reason about the equivalence of environments, i.e., name-to-value mappings, that arise during a program's execution. In this dissertation, two different techniques, abstract counting and abstract frame strings, make this possible. A third technique, abstract garbage collection, makes both of these techniques more precise and, counter to intuition, often faster as well. An array of optimizations and even deeper analyses which depend upon environment analysis provide motivation for this work.
In an abstract interpretation, a single abstract entity represents a set of concrete entities. When the entities under scrutiny are bindings (single name-to-value mappings, the atoms of environment), then determining when the equality of two abstract bindings implies the equality of their concrete counterparts is the crux of environment analysis. Abstract counting does this by tracking the size of represented sets, looking for singletons, in order to apply the following principle: if {x} = {y}, then x = y. Abstract frame strings enable environmental reasoning by statically tracking the possible stack change between the births of two environments; when this change is effectively empty, the environments are equivalent. Abstract garbage collection improves precision by intermittently removing unreachable environment structure during abstract interpretation.
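The singleton principle behind abstract counting can be made concrete in a few lines. The class and method names here are hypothetical, chosen for illustration; they are not the dissertation's implementation:

```python
# Toy abstract counting: track how many concrete bindings an abstract
# binding stands for, saturating at "many" (here represented by 2).

MANY = 2

class AbstractBinding:
    def __init__(self):
        self.count = 0                      # 0, 1, or MANY

    def allocate(self):
        """A concrete binding is created for this abstract binding."""
        self.count = min(self.count + 1, MANY)

    def collect(self):
        """Abstract GC: no concrete binding is reachable any longer."""
        self.count = 0

def must_equal(b1, b2):
    """Abstract equality implies concrete equality only for singletons:
    if {x} = {y}, then x = y."""
    return b1 is b2 and b1.count == 1

b = AbstractBinding()
b.allocate()
print(must_equal(b, b))    # True: the abstract binding is a singleton
b.allocate()               # now represents more than one concrete binding
print(must_equal(b, b))    # False: concrete equality can no longer be inferred
```

The `collect` method also hints at why abstract garbage collection helps precision: resetting unreachable counts lets later allocations be singletons again.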
|
817 |
Automatic Composition Of Semantic Web Services With The Abductive Event Calculus. Kirci, Esra, 01 September 2008 (has links) (PDF)
In today's world, composite web services are widely used in service oriented computing, web mashups, B2B applications, etc. Most of these services are composed manually. However, the complexity of manually composing web services increases exponentially with the increase in the number of available web services, the need for dynamically created/updated/discovered services, and the necessity for a larger number of data bindings and type mappings in longer compositions. Therefore, current highly manual web service composition techniques are far from being the answer to the web service composition problem. Automatic web service composition methods are recent research efforts to tackle the issues with manual techniques. Broadly, these methods fall into two groups: (i) workflow based methods and (ii) methods using AI planning. This thesis investigates the application of AI planning techniques to the web service composition problem and, in particular, it proposes the use of the abductive event calculus in this domain. Web service compositions are defined as templates using OWL-S ("OWL for Services"). These generic composition definitions are converted to the Prolog language as axioms for the abductive event calculus planner, and solutions found by the planner constitute the specific result plans for the generic composition plan. In this thesis it is shown that the abductive planning capabilities of the event calculus can be used to generate the web service composition plans that realize the generic procedure.
|
818 |
Abductive Planning Approach For Automated Web Service Composition Using Only User Specified Inputs And Outputs. Kuban, Esat Kaan, 01 February 2009 (links) (PDF)
In recent years, web services have become an emerging technology for communication and integration between applications in many areas such as business to business (B2B) or business to consumer (B2C). In this growing technology, it is hard to compose web services manually because of the increasing number and complexity of web services. Therefore, automation of this composition process has gained a considerable amount of popularity. Automated web service composition can be achieved either by generating the composition plan dynamically using given inputs and outputs, or by locating the correct services if an abstract process model is given. This thesis investigates the former method, which dynamically generates the composition by using the abductive planning capabilities of the Event Calculus. Event calculus axioms in the Prolog language are generated using the available OWL-S web service descriptions in the service repository, values given to selected inputs from ontologies used by those semantic web services, and desired output types selected again from the ontologies. The Abductive Theorem Prover, which is the AI planner used in this thesis, generates composition plans and execution results according to the generated event calculus axioms. In this thesis, it is shown that the abductive event calculus can be used for generating web service composition plans automatically, and returning the results of the generated plans by executing the necessary web services.
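The input/output-driven composition idea can be illustrated with a toy forward-chaining search over service signatures. The service names and the greedy strategy are made up for illustration; the thesis itself uses abductive event calculus planning rather than this naive search:

```python
# Each service maps a set of required input types to produced output types.
services = {
    "geocode":   ({"address"}, {"latlon"}),
    "forecast":  ({"latlon"}, {"weather"}),
    "translate": ({"weather", "language"}, {"localized_report"}),
}

def compose(available, goal):
    """Greedy forward chaining: repeatedly fire any service whose inputs
    are available, until the goal output types are all covered."""
    known, plan = set(available), []
    while not goal <= known:
        fired = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and inputs <= known and not outputs <= known:
                plan.append(name)
                known |= outputs
                fired = True
        if not fired:
            return None                     # goal unreachable from the inputs
    return plan

print(compose({"address", "language"}, {"localized_report"}))
# -> ['geocode', 'forecast', 'translate']
```

Only the user's input and output types are supplied; the chain of intermediate services is discovered automatically, which is the essence of the approach described above.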
|
819 |
A Monolithic Approach To Automated Composition Of Semantic Web Services With The Event Calculus. Okutan, Cagla, 01 September 2009 (has links) (PDF)
In this thesis, a web service composition and execution framework is presented for semantically annotated web services. A monolithic approach to the automated web service composition and execution problem is chosen, which provides some benefits by separating the composition and execution phases. An AI planning method using a logical formalism called Event Calculus is chosen for the composition phase. This formalism allows one to generate a narrative of actions and temporal orderings using abductive planning techniques, given a goal. Functional properties of services, namely inputs/outputs/preconditions/effects (IOPE), are taken into consideration in the composition phase, and non-functional properties, namely quality of service (QoS) parameters, are used in selecting the most appropriate solution to be executed. The repository of OWL-S semantic Web services is translated to Event Calculus axioms, and the resulting plans found by the Abductive Event Calculus Planner are converted to graphs. These graphs can be sorted according to a score calculated using the defined quality of service parameters of the atomic services in the composition, to determine the optimal solution. The selected graph is converted to an OWL-S file, which is subsequently executed.
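The QoS-based ranking of candidate graphs can be sketched as a weighted score over the atomic services in each plan. The attributes, weights, and aggregation rule below are hypothetical, chosen only to show the shape of such a scoring step:

```python
# Hypothetical QoS attributes per atomic service: (reliability, speed),
# where speed is the reciprocal of latency in milliseconds.
qos = {
    "s1": (0.99, 1 / 120),
    "s2": (0.90, 1 / 40),
    "s3": (0.97, 1 / 200),
}

def plan_score(plan, w_rel=0.7, w_speed=0.3):
    """Score a composition: reliabilities multiply along the plan
    (a chain fails if any service fails), speed contributions average."""
    rel, speed = 1.0, 0.0
    for s in plan:
        r, v = qos[s]
        rel *= r
        speed += v
    return w_rel * rel + w_speed * speed / len(plan)

candidates = [["s1", "s2"], ["s1", "s3"]]
best = max(candidates, key=plan_score)
print(best)    # ['s1', 's3']: higher joint reliability outweighs lower speed
```

With these weights the reliability term dominates, so the more reliable chain wins even though it is slower; changing the weights trades one QoS dimension against the other.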
|
820 |
A linearized DPLL calculus with clause learning (2nd, revised version). Arnold, Holger, January 2009 (has links)
Many formal descriptions of DPLL-based SAT algorithms either do not include all essential proof techniques applied by modern SAT solvers or are bound to particular heuristics or data structures. This makes it difficult to analyze proof-theoretic properties or the search complexity of these algorithms. In this paper we try to improve this situation by developing a nondeterministic proof calculus that models the functioning of SAT algorithms based on the DPLL calculus with clause learning. This calculus is independent of implementation details yet precise enough to enable a formal analysis of realistic DPLL-based SAT algorithms.
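The calculus described operates on states of clauses and partial assignments. A minimal executable sketch of plain DPLL with unit propagation, leaving the branching heuristic naive and omitting clause learning, might look as follows (an illustration only, not the paper's formal system):

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses; None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                    # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                 # conflict: clause falsified
            if len(unassigned) == 1:
                assignment = assignment | {unassigned[0]}
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment                   # every variable decided: SAT
    v = min(free)                           # naive branching: lowest index
    for lit in (v, -v):                     # decide, backtrack on failure
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None

# (p or q) and (not p or q) and (not q or r), with p=1, q=2, r=3
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]) is not None)   # True: satisfiable
print(dpll([{1}, {-1}]))                               # None: unsatisfiable
```

A clause-learning solver would additionally derive a new clause from each conflict before backtracking; modeling exactly that step, independently of heuristics, is what the paper's calculus formalizes.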
|