11 |
Convex optimization under inexact first-order information / Lan, Guanghui. 29 June 2009
In this thesis we investigate the design and complexity analysis of algorithms for solving convex programming problems under inexact first-order information. In the first part of the thesis we focus on general non-smooth convex minimization under a stochastic oracle. We start by introducing an important algorithmic advancement in this area, namely, the development of the mirror descent stochastic approximation algorithm. The main contribution is to develop a validation procedure for this algorithm applied to stochastic programming. In the second part of the thesis we consider Stochastic Composite Optimization (SCO), which covers smooth, non-smooth and stochastic convex optimization as certain special cases. For this class of problems a lower bound on the rate of convergence can be established; note, however, that optimization algorithms achieving this lower bound had never been developed. Our contribution on this topic mainly consists of the following aspects. Firstly, with a novel analysis, it is demonstrated that the simple RM-SA algorithm applied to the aforementioned problems exhibits the best rate of convergence known so far. Moreover, by adapting Nesterov's optimal method, we propose an accelerated SA, which achieves, uniformly in dimension, the theoretically optimal rate of convergence for solving this class of problems. Finally, the significant advantages of the accelerated SA over existing algorithms are illustrated in the context of solving a class of stochastic programming problems. In the last part of this work, we turn to certain deterministic optimization techniques that operate on approximate first-order information for the dual problem. In particular, we establish, for the first time in the literature, the iteration-complexity of inexact augmented Lagrangian (I-AL) methods applied to a special class of convex programming problems.
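To make the mirror descent stochastic approximation idea concrete, the following is a minimal illustrative sketch rather than the algorithm as developed in the thesis: it minimizes a linear objective over the probability simplex with an entropy mirror map and a noisy subgradient oracle, and the objective, noise level, and step-size rule are assumptions chosen only for illustration.

```python
# A minimal sketch (not the thesis code) of mirror-descent stochastic
# approximation over the probability simplex, using the entropy mirror map.
# The objective, noise level, and step sizes below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

n, iters = 10, 5000
c = rng.uniform(0.0, 1.0, size=n)          # true linear objective: f(x) = <c, x>

def stochastic_subgradient(x):
    """Unbiased noisy subgradient of f(x) = <c, x>: the oracle returns c + noise."""
    return c + 0.1 * rng.standard_normal(n)

x = np.full(n, 1.0 / n)                    # start at the simplex center
x_avg = np.zeros(n)

for t in range(1, iters + 1):
    g = stochastic_subgradient(x)
    gamma = 1.0 / np.sqrt(t)               # O(1/sqrt(t)) step size, typical for robust SA
    x = x * np.exp(-gamma * g)             # entropic (exponentiated-gradient) update
    x /= x.sum()                           # re-normalize onto the simplex
    x_avg += (x - x_avg) / t               # running average of the iterates

print("estimated minimizer puts most mass on coordinate", x_avg.argmax())
print("true best coordinate:", c.argmin())
```

Averaging the iterates, as in robust SA, is what yields the O(1/sqrt(t)) guarantee for general non-smooth stochastic convex problems; the accelerated SA discussed in the abstract improves on this for the composite setting.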
|
12 |
Design and development of a new time integration framework, GS4-1, and its application to silica particle deposition / Masuri, Siti Ujila Binti. January 2012
Growing interest in the simulation of first-order transient systems, typical of those encountered in transient heat conduction, flow transport, and fluid dynamics, has prompted the development of a variety of time integration methods for solving these systems numerically. The primary contribution of this thesis is the design and development of a new time integration/discretization framework, within the popular class of single-step single-solve algorithms, for use in such first-order transient systems, with computationally attractive features. These include second-order accuracy, unconditional stability, zero-order overshoot, and controllable numerical dissipation with a new selective control feature which overcomes the restrictions of the existing and current state-of-the-art methods. Throughout the thesis, we demonstrate the capability and advantage of the newly developed framework, termed GS4-1, in comparison to existing methods using various types of numerical examples (both linear and nonlinear). The numerical results consistently demonstrate the role played by the new feature in improving the numerical solutions of both the primary variable and its time derivative, which is important for correctly capturing the dynamics of the problems, in contrast to existing methods without such a feature. Additionally, a breakthrough contribution presented in this thesis is the development of an isochronous integration framework (iIntegrator), stemming from the novel relations between the newly developed GS4-1 framework and the existing GS4-2 framework (for second-order dynamic systems). Such a development enables the use of the same computational framework to solve both first- and second-order dynamic systems without having to resort to the individual GS4-1 and GS4-2 frameworks, which is practical from both the computational and the implementation standpoints. Finally, the application of the new GS4-1 framework to silica particle deposition, which is a practical problem of interest, is presented with the focus primarily on the physics of the problem. In this part of the thesis, a numerical model of the problem is presented and employed to investigate the effects of the flow and physicochemical parameters on the rate of deposition. The results of the parametric studies undertaken with this numerical model enable some recommendations for the mitigation of the problem, and therefore serve as an additional valuable contribution of the thesis.
|
13 |
NORMALIZATION OF THE DELANO DIAGRAM / López-López, F. J. 07 1900
QC 351 A7 no. 57 / A normalization of the Delano y-ȳ diagram is proposed in which the y heights are normalized by the entrance pupil height and the ȳ heights by the image height. The normalization constants are expressed in terms of the system parameters, and it is seen that the reduced distances become normalized by the focal length of the system, the marginal ray reduced angles by the numerical aperture of the system, the chief ray angles by the field aperture, and the powers by the total power of the system. It is also shown that any number of refractions and transfers will not affect this normalization, but a stop or conjugate shift will destroy it and renormalization then becomes necessary.
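Written out with hypothetical symbols (y_p for the entrance pupil height, ȳ_i for the image height, f for the focal length, NA for the numerical aperture, ω̄_f for the chief ray angle set by the field aperture, and Φ for the total power; these symbols are chosen here for illustration and are not quoted from the report), the normalization described above reads
\[
\hat{y} = \frac{y}{y_p}, \qquad
\hat{\bar{y}} = \frac{\bar{y}}{\bar{y}_i}, \qquad
\hat{\tau} = \frac{\tau}{f}, \qquad
\hat{\omega} = \frac{\omega}{\mathrm{NA}}, \qquad
\hat{\bar{\omega}} = \frac{\bar{\omega}}{\bar{\omega}_f}, \qquad
\hat{\phi} = \frac{\phi}{\Phi},
\]
where \(\tau\) denotes a reduced distance, \(\omega\) and \(\bar{\omega}\) the marginal and chief ray reduced angles, and \(\phi\) a surface power.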
|
14 |
Algorithmic correspondence and completeness in modal logic / Conradie, Willem Ernst. 06 March 2008
Abstract
This thesis takes an algorithmic perspective on the correspondence between modal and hybrid logics on the one hand, and first-order logic on the other. The canonicity of formulae, and by implication the completeness of logics, is simultaneously treated.

Modal formulae define second-order conditions on frames which, in some cases, are equivalently reducible to first-order conditions. Modal formulae for which the latter is possible are called elementary. As is well known, it is algorithmically undecidable whether a given modal formula defines a first-order frame condition or not. Hence, any attempt at delineating the class of elementary modal formulae by means of a decidable criterion can only constitute an approximation of this class. Syntactically specified such approximations include the classes of Sahlqvist and inductive formulae. The approximations we consider take the form of algorithms.

We develop an algorithm called SQEMA, which computes first-order frame equivalents for modal formulae by first transforming them into pure formulae in a reversive hybrid language. It is shown that this algorithm subsumes the classes of Sahlqvist and inductive formulae, and that all formulae on which it succeeds are d-persistent (canonical), and hence axiomatize complete normal modal logics.

SQEMA is extended to polyadic languages, and it is shown that this extension succeeds on all polyadic inductive formulae. The canonicity result is also transferred.

SQEMA is next extended to hybrid languages. Persistence results with respect to discrete general frames are obtained for certain of these extensions. The notion of persistence with respect to strongly descriptive general frames is investigated, and some syntactic sufficient conditions for such persistence are obtained. SQEMA is adapted to guarantee the persistence with respect to strongly descriptive frames of the hybrid formulae on which it succeeds, and hence the completeness of the hybrid logics axiomatized with these formulae. New syntactic classes of elementary and canonical hybrid formulae are obtained.

Semantic extensions of SQEMA are obtained by replacing the syntactic criterion of negative/positive polarity, used to determine the applicability of a certain transformation rule, by its semantic correlate, monotonicity. In order to guarantee the canonicity of the formulae on which the thus extended algorithm succeeds, syntactically correct equivalents for monotone formulae are needed. Different versions of Lyndon's monotonicity theorem, which guarantee the existence of these equivalents, are proved. Constructive versions of these theorems are also obtained by means of techniques based on bisimulation quantifiers.

Via the standard second-order translation, the modal elementarity problem can be attacked with any second-order quantifier elimination algorithm. Our treatment of this approach takes the form of a study of the DLS algorithm. We partially characterize the formulae on which DLS succeeds in terms of syntactic criteria. It is shown that DLS succeeds in reducing all Sahlqvist and inductive formulae, and that all modal formulae in a single propositional variable on which it succeeds are canonical.
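As a standard illustration of the kind of first-order frame equivalents such algorithms compute (these are textbook Sahlqvist examples, not results specific to the thesis):
\[
\begin{aligned}
\Box p \rightarrow p \;&\rightsquigarrow\; \forall x\, R(x,x) && \text{(reflexivity)}\\
\Box p \rightarrow \Box\Box p \;&\rightsquigarrow\; \forall x\,\forall y\,\forall z\,\bigl(R(x,y)\wedge R(y,z)\rightarrow R(x,z)\bigr) && \text{(transitivity)}\\
\Diamond\Box p \rightarrow \Box\Diamond p \;&\rightsquigarrow\; \forall x\,\forall y\,\forall z\,\bigl(R(x,y)\wedge R(x,z)\rightarrow \exists w\,(R(y,w)\wedge R(z,w))\bigr) && \text{(confluence)}
\end{aligned}
\]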
|
15 |
Hilbert's thesis : some considerations about formalizations of mathematics / Berk, Lon A. January 1982
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND HUMANITIES / Bibliography: leaves 175-176. / by Lon A. Berk. / Ph.D.
|
16 |
Monotone method for nonlocal systems of first order / Liu, Weian. January 2005
In this paper, the monotone method is extended to initial-boundary value problems for nonlocal first-order PDE systems, both quasi-monotone and non-monotone. A comparison principle is established, and a monotone scheme is given.
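For orientation, a monotone iteration of this kind typically has the following shape (a generic sketch under assumed notation, ignoring the nonlocal coupling, and not the construction of the paper): starting from ordered lower and upper solutions \(\underline{u} \le \overline{u}\), one solves a sequence of linear problems
\[
\partial_t u^{(k+1)} + a(x,t)\,\partial_x u^{(k+1)} + c\,u^{(k+1)} \;=\; c\,u^{(k)} + f\bigl(x,t,u^{(k)}\bigr),
\]
with the constant \(c\) chosen large enough that the right-hand side is nondecreasing in \(u^{(k)}\); the comparison principle then forces the iterates started from \(\underline{u}\) and from \(\overline{u}\) to form monotone sequences that squeeze a solution of the original problem.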
|
17 |
Temperature and Polarization Dependence on Holographic Gratings and Its Applications Based on Polymer and Liquid Crystals / Huang, Shuan-Yu. 20 July 2005
The first-order diffraction efficiency and the formation mechanism of holographic gratings have been investigated in dye-doped liquid crystals (DDLC) and in liquid crystals with an azo-dye-doped polymer film. The thesis mainly contains three experimental parts, in which the sample temperature and the polarizations of the writing and probing beams are varied. The first part covers the temporal profiles of the diffraction efficiency of transient gratings and their temperature and polarization dependence in azo-dye-doped liquid crystals. The dynamics of molecular reorientation in transient gratings can be understood by analyzing the build-up time of the peak efficiency and the relaxation decay of the first-order diffraction.
The study of the polarization and temperature dependence allows us to understand the underlying mechanism of laser-induced transient gratings. The second part concentrates on the diffusion process of photoexcited dye in a planar liquid crystal host. The experimental results reveal that the diffusion coefficient is larger for the molecular director along the grating vector than for the perpendicular case, and that diffusion becomes faster as the temperature increases. The third part focuses on the formation mechanism and the temperature dependence of holographic gratings in liquid crystals with an azo-dye-doped polymer film. The temporal profile of the first-order diffraction intensity shows a dip at temperatures within the nematic phase. The dip in the first-order diffraction intensity is temperature dependent and can be attributed to light scattering arising from the photothermal effect. The transient behavior in the dip of the transmitted probe beam is also temperature dependent. The surface modulation has been measured using an atomic force microscope (AFM). The surface relief grating of the liquid crystals with the azo-dye-doped polymer film is deeper than that of the azo-dye-doped polymer film alone, and the first-order diffraction efficiency is also larger for the liquid crystals with the polymer film.
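For context, the link between the measured grating dynamics and the diffusion coefficient is the standard transient-grating relation (assumed here as background, not quoted from the thesis): for a grating of spacing \(\Lambda\), the diffusive decay of the first-order diffraction goes as
\[
\eta_1(t) \;\propto\; e^{-2t/\tau}, \qquad \frac{1}{\tau} = D\,q^2, \qquad q = \frac{2\pi}{\Lambda},
\]
so at a fixed grating spacing a faster decay of the diffracted signal corresponds directly to a larger diffusion coefficient \(D\) along the grating vector.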
|
18 |
Reliability methods in dynamic system analysis / Munoz, Brad Ernest. 26 April 2013
Standard techniques used to analyze a system's response with uncertain system parameters or inputs are generally importance sampling methods. Sampling methods require a large number of simulation runs before the system output statistics can be analyzed. As model fidelity increases, sampling techniques become computationally infeasible, and reliability methods have gained popularity as an analysis approach that requires significantly fewer simulation runs. Reliability analysis is an analytic technique that finds a particular point in the design space which can accurately be related to the probability of system failure. However, its application to dynamic systems has remained limited.
In the following thesis a First Order Reliability Method (FORM) is used to determine the failure probability of a dynamic system due to system and input uncertainties. A pendulum-cart system is used as a case study to demonstrate the FORM on a dynamic system. Three failure modes are discussed, corresponding to the maximum pendulum angle, the maximum system velocity, and a combined requirement that neither the maximum pendulum angle nor the maximum system velocity is exceeded. An explicit formulation is generated from the implicit formulation using a response surface methodology, and the FORM is performed using the explicit estimate. Although the analysis converges with minimal simulation computations, attempts to verify the FORM results illuminate current limitations of the methodology. The results of this initial study conclude that, currently, sampling techniques are necessary to verify the FORM results, which restricts the potential applications of the FORM methodology. Suggested future work focuses on result verification without the use of importance sampling, which would allow reliability methods to have widespread applicability.
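The core FORM computation behind the analysis described above can be illustrated with a small self-contained sketch. This is an assumed toy example (a linear capacity-versus-demand limit state with made-up means and standard deviations), not the pendulum-cart model from the thesis; it shows the transformation to standard normal space, the search for the most probable failure point, the reliability index beta, and the estimate Pf ≈ Φ(-β).

```python
# Minimal illustrative FORM sketch (not the thesis model): estimate the failure
# probability for a toy limit state g(x) = capacity - demand with independent
# normal variables. All distributions and the limit state are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([10.0, 6.0])      # means of [capacity, demand]
sd = np.array([1.5, 1.0])       # standard deviations

def g_physical(x):
    """Limit state: failure when demand exceeds capacity, i.e. g <= 0."""
    return x[0] - x[1]

def g_standard(u):
    """Same limit state expressed in standard normal space u = (x - mu)/sd."""
    return g_physical(mu + sd * u)

# Design point (most probable failure point): the point on g(u) = 0 closest
# to the origin in standard normal space.
res = minimize(lambda u: u @ u, x0=np.array([0.1, -0.1]),
               constraints=[{"type": "eq", "fun": g_standard}])
beta = np.sqrt(res.fun)          # reliability index
pf_form = norm.cdf(-beta)        # FORM estimate of the failure probability

# For this linear limit state the exact answer is available as a check:
pf_exact = norm.cdf(-(mu[0] - mu[1]) / np.sqrt(sd[0]**2 + sd[1]**2))
print(f"beta = {beta:.3f}, FORM Pf = {pf_form:.4e}, exact Pf = {pf_exact:.4e}")
```

For this linear limit state FORM is exact, which is what makes the comparison in the last line a useful sanity check; for the nonlinear, implicit limit states of a dynamic system, the response surface step described above supplies the explicit g used in the same computation.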
|
19 |
A constructive interpretation of a fragment of first order logic / Lamarche, François. January 1983
No description available.
|
20 |
Semantic Integration of Time Ontologies / Ong, Darren. 15 December 2011
Here we consider the verification and semantic integration of the set of first-order time ontologies by Allen and Hayes, Ladkin, and van Benthem, which axiomatize time as points, intervals, or a combination of both, within an ontology repository environment. Semantic integration of the set of time ontologies is explored via the notion of theory interpretations, using an automated reasoner as part of the methodology. We use the notion of representation theorems for verification, characterizing the models of each ontology up to isomorphism and proving that they are equivalent to the intended structures for that ontology. We provide a complete account of the meta-theoretic relationships between the ontologies, along with corrections to their axioms, translation definitions, proofs of representation theorems, and a discussion of various issues such as class-quantified interpretations, the impact of namespacing support for Common Logic, and ontology repository support for semantic integration as related to the time ontologies examined.
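As a schematic illustration of what a theory interpretation between such ontologies looks like (the symbols beginof, endof, and before used here are generic endpoint-style names chosen for illustration, not necessarily those of the ontologies in the repository), an interval ontology can be interpreted in a point ontology by mapping each interval to its pair of endpoints:
\[
\begin{aligned}
\pi(\mathit{interval}(i)) \;&:=\; \mathit{before}(\mathit{beginof}(i),\, \mathit{endof}(i)),\\
\pi(\mathit{meets}(i,j)) \;&:=\; \mathit{endof}(i) = \mathit{beginof}(j),\\
\pi(\mathit{precedes}(i,j)) \;&:=\; \mathit{before}(\mathit{endof}(i),\, \mathit{beginof}(j)).
\end{aligned}
\]
Verifying that the point ontology entails the translations of the interval axioms establishes the interpretation, and this entailment check is precisely the kind of task that can be handed to an automated reasoner.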
|