  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Modeling and verifying dynamic evolving service-oriented architectures

Giese, Holger, Becker, Basil January 2013 (has links)
The service-oriented architecture supports the dynamic assembly and runtime reconfiguration of complex open IT landscapes by means of runtime binding of service contracts, the launching of new components, and the termination of outdated ones. Furthermore, the evolution of these IT landscapes is not restricted to exchanging components that use the same service contracts: new service contracts can be added as well. However, current approaches for modeling and verifying service-oriented architectures do not support these important capabilities to their full extent. In this report we present an extension of the current OMG proposal for service modeling with UML, SoaML, which overcomes these limitations. It permits modeling services and their service contracts at different levels of abstraction, provides a formal semantics for all modeling concepts, and enables verifying critical properties. Our compositional and incremental verification approach handles complex properties involving communication parameters and time, and covers not only the dynamic binding of service contracts and the replacement of components but also the evolution of the system through the addition of new service contracts. The modeling and verification capabilities of the presented approach are demonstrated on a supply chain example, and verification results from a first prototype are shown.
332

Applicability of deterministic global optimization to the short-term hydrothermal coordination problem

Ferrer Biosca, Alberto 30 March 2004 (has links)
This thesis is motivated by the interest in applying deterministic global optimization procedures to real-world problems with no special structure. We have focused on the Short-Term Hydrothermal Coordination of Electricity Generation Problem (also called the Generation Problem in this thesis), where the objective function and the nonlinear constraints are polynomials of degree up to four. In the Generation Problem no d.c. representation of the involved functions is available, and we cannot take advantage of any special structure of the problem either. Hence, such a general problem does not seem to have any mathematical structure conducive to computational implementation. Nevertheless, when f(x) is a continuous function and S is a nonempty closed set, the problem can be transformed into an equivalent one of the form: minimize l(z) subject to z ∈ D \ int C (a canonical d.c. program), where l(z) is a convex function (usually a linear function) and D and C are closed convex sets. A complementary convex structure such as D \ int C is not always apparent, and even when it is explicit, much work remains to bring it into a form amenable to efficient computational implementation. The attractive feature of the complementary convex structure is that it involves convexity, so analytical tools from convex analysis, such as subdifferentials and supporting hyperplanes, can be applied. On the other hand, since convexity appears in a reverse sense, these tools must be used in a specific way and combined with combinatorial tools such as cutting planes, branch and bound, and outer approximation. We introduce the general complementary convex structure underlying global optimization problems and describe the Generation Problem, whose functions are d.c. functions because they are polynomials. Using the properties of d.c. functions, we describe the Generation Problem as an equivalent canonical d.c. program. From the structure of its functions, the Generation Problem can be rewritten as a more suitable equivalent reverse convex program in order to obtain advantageous numerical implementations. Concepts and properties are introduced that allow us to obtain an explicit representation of a polynomial as a difference of convex polynomials, based on the fact that the mth powers of homogeneous polynomials of degree 1 generate the vector space of homogeneous polynomials of degree m. We also describe a new global optimization algorithm (the adapted algorithm) to solve the Generation Problem. Since the equivalent reverse convex program is unbounded, we use prismatic subdivisions instead of conical ones, and we prove the convergence of the adapted algorithm by combining the prismatic subdivision process with an outer approximation procedure. We state the Minimal Norm Problem, using the concept of Least Deviation Decomposition, in order to obtain the optimal d.c. representation of a polynomial function; this yields a more efficient implementation by reducing the number of iterations of the adapted algorithm. To solve the Minimal Norm Problem, we implement a quadratic semi-infinite programming algorithm using the build-up and build-down strategy introduced by Den Hertog (1997) for linear semi-infinite programs, which employs a logarithmic barrier method. Finally, computational results for the implementations of the above algorithms are given and conclusions are drawn.
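The d.c. decomposition the thesis relies on can be illustrated in one dimension: a polynomial p can be written as a difference of convex polynomials on an interval by adding and subtracting a sufficiently large multiple of x². The sketch below uses this generic shift construction, not the thesis's least-deviation decomposition; the example polynomial is hypothetical.

```python
import numpy as np

def dc_split(p_coeffs, a, b, n_grid=1001):
    """Split polynomial p into convex g minus convex h on [a, b].

    Uses p(x) = (p(x) + M*x^2) - M*x^2 with M >= max(0, -p''(x)/2),
    so both g(x) = p(x) + M*x^2 and h(x) = M*x^2 have nonnegative
    second derivative on the interval, i.e. both are convex there.
    """
    p = np.polynomial.Polynomial(p_coeffs)
    p2 = p.deriv(2)
    xs = np.linspace(a, b, n_grid)
    M = max(0.0, float(np.max(-p2(xs) / 2.0)))
    g = p + np.polynomial.Polynomial([0.0, 0.0, M])  # p + M*x^2
    h = np.polynomial.Polynomial([0.0, 0.0, M])      # M*x^2
    return g, h, M

# Example: p(x) = x^4 - 3x^2 (nonconvex near 0) on [-2, 2].
g, h, M = dc_split([0.0, 0.0, -3.0, 0.0, 1.0], -2.0, 2.0)
xs = np.linspace(-2.0, 2.0, 501)
# g - h reproduces p, and both pieces are convex on the interval.
assert np.allclose(g(xs) - h(xs), xs**4 - 3 * xs**2)
assert np.all(g.deriv(2)(xs) >= -1e-9) and np.all(h.deriv(2)(xs) >= -1e-9)
```

The shift M is generally far from the optimal (least-deviation) choice; a large M slows reverse convex algorithms down, which is precisely the motivation the thesis gives for solving the Minimal Norm Problem.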
333

Validated Continuation for Infinite Dimensional Problems

Lessard, Jean-Philippe 07 August 2007 (has links)
Studying the zeros of a parameter-dependent operator F defined on a Hilbert space H is a fundamental problem in mathematics. When the Hilbert space is finite dimensional, continuation provides, via predictor-corrector algorithms, efficient techniques to numerically follow the zeros of F as we move the parameter. In the case of infinite dimensional Hilbert spaces, this procedure must be applied to some finite dimensional approximation, which of course raises the question of validity of the output. We introduce a new technique that combines the information obtained from the predictor-corrector steps with ideas from rigorous computations and verifies that the numerically produced zero for the finite dimensional system can be used to explicitly define a set which contains a unique zero for the infinite dimensional problem F : H × R → Im(F). We use this new validated continuation to study equilibrium solutions of partial differential equations, to prove the existence of chaos in ordinary differential equations, and to follow branches of periodic solutions of delay differential equations. In the context of partial differential equations, we show that the cost of validated continuation is less than twice the cost of the standard continuation method alone.
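The finite-dimensional predictor-corrector machinery that validated continuation builds on can be sketched on a scalar equation. This is a generic illustration (natural-parameter continuation with a Newton corrector), not the thesis's algorithm; the validation step that encloses the true zero rigorously is omitted.

```python
def continuation(f, dfdx, x0, lam0, lam1, steps=20, tol=1e-12):
    """Natural-parameter predictor-corrector continuation in 1-D.

    Predictor: reuse the previous zero as the initial guess.
    Corrector: Newton iteration on x |-> f(x, lam) at the new parameter.
    Returns the list of (lambda, x) pairs along the branch.
    """
    path = [(lam0, x0)]
    x = x0
    for k in range(1, steps + 1):
        lam = lam0 + (lam1 - lam0) * k / steps
        for _ in range(50):                  # Newton corrector
            step = f(x, lam) / dfdx(x, lam)
            x -= step
            if abs(step) < tol:
                break
        path.append((lam, x))
    return path

# Follow the zero x(lam) = sqrt(lam) of f(x, lam) = x^2 - lam from lam=1 to 4.
path = continuation(lambda x, l: x * x - l, lambda x, l: 2 * x, 1.0, 1.0, 4.0)
assert abs(path[-1][1] - 2.0) < 1e-9
```

In the validated version described in the abstract, each computed point would additionally come with an explicitly constructed set proven to contain a unique zero of the full infinite-dimensional operator.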
334

Tracing The Footsteps Of The Young Leibniz In The Labyrinth Of The Continuum

Ebeturk, Emre 01 September 2008 (has links) (PDF)
This study is an attempt to explicate Gottfried Wilhelm von Leibniz's search for a way out of the labyrinth of the continuum in his early years of philosophizing. The main motive of the study is the belief that it would be worthwhile to see how Leibniz initially goes into the labyrinth and comes across the riddles contained in it. Accordingly, this thesis is intended to discuss what the problem of the composition of the continuum is for the young Leibniz, which concepts and metaphysical problems are associated with the labyrinth, and what particular difficulties challenge Leibniz in his struggle. More importantly, the study seeks to delineate how Leibniz responds to these difficulties, what kinds of solutions he suggests, and how and why he changes his mind and offers different accounts concerning the composition of the continuum in his early writings. In this search for a way out of the labyrinth, some of the early writings of Leibniz, written between 1666 and 1675, were studied, with a particular emphasis on those directly related to the labyrinth of the continuum. During the study, the differences and transitions between geometrical, physical, and metaphysical accounts concerning the problem of the composition of the continuum were examined, with a special focus on the bridging role of 'motion' and the notion of 'conatus.'
335

The Isoperimetric Problem On Trees And Bounded Tree Width Graphs

Bharadwaj, Subramanya B V 09 1900 (has links)
In this thesis we study the isoperimetric problem on trees and graphs with bounded treewidth. Let G = (V,E) be a finite, simple, undirected graph. For S ⊆ V, let δ(S,G) = {(u,v) ∈ E : u ∈ S and v ∈ V − S} be the edge boundary of S. Given an integer i, 1 ≤ i ≤ |V|, the edge isoperimetric value of G at i is defined as b_e(i,G) = min over S ⊆ V with |S| = i of |δ(S,G)|. For S ⊆ V, let Φ(S,G) = {u ∈ V − S : there exists v ∈ S with (u,v) ∈ E} be the vertex boundary of S. Given an integer i, 1 ≤ i ≤ |V|, the vertex isoperimetric value of G at i is defined as b_v(i,G) = min over S ⊆ V with |S| = i of |Φ(S,G)|. The edge isoperimetric peak of G is defined as b_e(G) = max over i of b_e(i,G), and the vertex isoperimetric peak as b_v(G) = max over i of b_v(i,G). The problem of determining a lower bound for the vertex isoperimetric peak in complete k-ary trees of depth d, T_k^d, was recently considered in [32]. In the first part of this thesis we provide lower bounds for the edge and vertex isoperimetric peaks in complete k-ary trees which improve those in [32]. Our results are then generalized to arbitrary (rooted) trees. Let i be an integer with 1 ≤ i ≤ |V|. For each i, define the connected edge isoperimetric value and the connected vertex isoperimetric value of G at i by restricting the minima above to sets S for which the induced subgraph G[S] is connected. A meta-Fibonacci sequence is given by the recurrence a(n) = a(x_1(n) + a_1′(n − 1)) + a(x_2(n) + a_2′(n − 2)), where x_i : Z+ → Z+, i = 1, 2, is a linear function of n and a_i′(j) = a(j) or a_i′(j) = −a(j), for i = 1, 2. Sequences belonging to this class have been well studied, but in general their properties remain intriguing. In the second part of the thesis we show an interesting connection between the problem of determining the connected isoperimetric values and certain meta-Fibonacci sequences. In the third part of the thesis we study the problem of determining b_e and b_v algorithmically for certain special classes of graphs.
Definition 0.1. A tree decomposition of a graph G = (V,E) is a pair ({X_i : i ∈ I}, T), where I is an index set, {X_i : i ∈ I} is a collection of subsets of V, and T is a tree whose node set is I, such that: (i) the union of all X_i is V; (ii) for every edge (u,v) ∈ E there is some i ∈ I with u, v ∈ X_i; and (iii) for every v ∈ V, the set {i ∈ I : v ∈ X_i} induces a connected subtree of T.
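The edge isoperimetric definitions above can be checked by brute force on small trees. The sketch below enumerates all vertex subsets, so it is exponential in |V| and meant only to make the definitions concrete; the example tree is a complete binary tree of depth 2.

```python
from itertools import combinations

def edge_boundary(S, edges):
    """|delta(S, G)|: number of edges with exactly one endpoint in S."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

def b_e(i, vertices, edges):
    """Edge isoperimetric value at i: min |delta(S, G)| over |S| = i."""
    return min(edge_boundary(set(S), edges)
               for S in combinations(vertices, i))

def edge_isoperimetric_peak(vertices, edges):
    """b_e(G) = max over i of b_e(i, G)."""
    return max(b_e(i, vertices, edges) for i in range(1, len(vertices) + 1))

# Complete binary tree of depth 2 (7 vertices, root 0).
V = list(range(7))
E = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
assert b_e(1, V, E) == 1   # a single leaf has one boundary edge
assert b_e(3, V, E) == 1   # a whole subtree {1,3,4} has one boundary edge
assert b_e(7, V, E) == 0   # S = V has empty boundary
```

On trees the minimizing sets tend to be unions of subtrees, which is the structure the thesis exploits to obtain its lower bounds without this exhaustive search.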
336

A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion

Martin, James Robert, Ph. D. 18 September 2015 (has links)
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around the existing forward solvers, as long as they are appropriately equipped, for a given physical problem. Then a collection of tools, insights and numerical methods may be applied to solve the problem, and interrogate the resulting posterior distribution, which describes our final state of knowledge. 
We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
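In the linear-Gaussian special case, the posterior distribution the dissertation describes is available in closed form, which makes for a compact illustration. The example below is a hypothetical two-parameter problem with an identity forward map, nothing like the seismic application, and the formulas are the standard conjugate-Gaussian identities.

```python
import numpy as np

def gaussian_posterior(A, y, noise_var, prior_mean, prior_cov):
    """Posterior mean/covariance for y = A m + noise, Gaussian prior on m.

    Closed form: C_post = (A^T A / s2 + C_pr^{-1})^{-1}
                 m_post = C_post (A^T y / s2 + C_pr^{-1} m_pr)
    """
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(A.T @ A / noise_var + prior_prec)
    post_mean = post_cov @ (A.T @ y / noise_var + prior_prec @ prior_mean)
    return post_mean, post_cov

# Tiny example: identity forward map, so the posterior mean shrinks the
# observation toward the prior mean in proportion to the noise level.
A = np.eye(2)
m_pr = np.zeros(2)
C_pr = np.eye(2)
y = np.array([2.0, -2.0])
m_post, C_post = gaussian_posterior(A, y, 1.0, m_pr, C_pr)
assert np.allclose(m_post, y / 2.0)          # equal prior and noise weight
assert np.allclose(C_post, np.eye(2) / 2.0)  # posterior variance halved
```

The computational challenge the dissertation addresses is precisely that none of these dense-matrix operations scale to 10⁶ parameters with an expensive PDE forward map, which is what motivates its specialized framework.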
337

Three essays on valuation and investment in incomplete markets

Ringer, Nathanael David 01 June 2011 (has links)
Incomplete markets provide many challenges for both investment decisions and valuation problems. While both problems have received extensive attention in complete markets, there remain many open areas in the theory of incomplete markets. We present the results in three parts. In the first essay we consider the Merton investment problem of optimal portfolio choice when the traded instruments are the set of zero-coupon bonds. Working within a Markovian Heath-Jarrow-Morton framework of the interest rate term structure driven by an infinite dimensional Wiener process, we give sufficient conditions for the existence and uniqueness of an optimal investment strategy. When there is uniqueness, we provide a characterization of the optimal portfolio. Furthermore, we show that a specific Gauss-Markov random field model can be treated within this framework, and explicitly calculate the optimal portfolio. We show that the optimal portfolio in this case can be identified with the discontinuities of a certain function of the market parameters. In the second essay we price a claim, using the indifference valuation methodology, in the model presented in the first section. We appeal to the indifference pricing framework instead of the classic Black-Scholes method due to the natural incompleteness in such a market model. Because we price time-sensitive interest rate claims, the units in which we price are very important. This will require us to take care in formulating the investor’s utility function in terms of the units in which we express the wealth function. This leads to new results, namely a general change-of-numeraire theorem in incomplete markets via indifference pricing. Lastly, in the third essay, we propose a method to price credit derivatives, namely collateralized debt obligations (CDOs) using indifference. We develop a numerical algorithm for pricing such CDOs. 
The high illiquidity of the CDO market coupled with the allowance of default in the underlying traded assets creates a very incomplete market. We explain the market-observed prices of such credit derivatives via the risk aversion of investors. In addition to a general algorithm, several approximation schemes are proposed.
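Indifference valuation can be illustrated in its simplest degenerate case: exponential utility and no hedging opportunities, where the indifference price reduces to the certainty equivalent. The numbers below are a hypothetical binary claim, far simpler than the interest-rate and CDO settings of these essays.

```python
import math

def indifference_price(payoffs, probs, gamma):
    """Buyer's exponential-utility indifference price with no hedging.

    Solves E[-exp(-gamma*(B - p))] = -1, giving the certainty equivalent
    p = -(1/gamma) * log E[exp(-gamma * B)] for claim payoff B.
    """
    mgf = sum(q * math.exp(-gamma * b) for b, q in zip(payoffs, probs))
    return -math.log(mgf) / gamma

# Claim paying 1 or 0 with equal probability.
p_low = indifference_price([1.0, 0.0], [0.5, 0.5], 0.01)   # nearly risk-neutral
p_high = indifference_price([1.0, 0.0], [0.5, 0.5], 5.0)   # very risk-averse
assert abs(p_low - 0.5) < 0.01   # tends to the expected value as gamma -> 0
assert p_high < p_low            # risk aversion lowers the buyer's price
```

Allowing partial hedging, as in the essays, replaces the plain expectation with an optimization over trading strategies, but the defining indifference equation, equal optimal utility with and without the claim, is the same.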
338

Creating Correct Network Protocols

Wibling, Oskar January 2008 (has links)
Network protocol construction is a complex and error prone task. The challenges originate both from the inherent complexity of developing correct program code and from the distributed nature of networked systems. Protocol errors can have devastating consequences. Even so, methods for ensuring protocol correctness are currently only used to a limited extent. A central reason for this is that they are often complex and expensive to employ. In this thesis, we develop methods to perform network protocol testing and verification, with the goal to make the techniques more accessible and readily adoptable. We examine how to formulate correctness requirements for ad hoc routing protocols used to set up forwarding paths in wireless networks. Model checking is a way to verify such requirements automatically. We investigate scalability of finite-state model checking, in terms of network size and topological complexity, and devise a manual abstraction technique to improve scalability. A methodology combining simulations, emulations, and real world experiments is developed for analyzing the performance of wireless protocol implementations. The technique is applied in a comparison of the ad hoc routing protocols AODV, DSR, and OLSR. Discrepancies between simulations and real world behavior are identified; these are due to absence of realistic radio propagation and mobility models in simulation. The issues are mainly related to how the protocols sense their network surroundings and we identify improvements to these capabilities. Finally, we develop a methodology and a tool for automatic verification of safety properties of infinite-state network protocols, modeled as graph transformation systems extended with negative application conditions. The verification uses symbolic backward reachability analysis. By introducing abstractions in the form of summary nodes, the method is extended to protocols with recursive data structures. 
Our tool automatically verifies correct routing of the DYMO ad hoc routing protocol and several nontrivial heap manipulating programs.
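The backward reachability fixpoint at the heart of the verification approach is easiest to see on a finite transition system. The sketch below uses hypothetical protocol states; the thesis's setting, infinite-state graph transformation systems analyzed symbolically, is much richer, but the fixpoint shape is the same.

```python
def backward_reachable(transitions, bad):
    """Backward fixpoint pre*(bad): all states that can reach a bad state.

    transitions is a set of (src, dst) pairs; iterate adding predecessors
    until no state changes, i.e. the fixpoint is reached.
    """
    reach = set(bad)
    changed = True
    while changed:
        changed = False
        for s, t in transitions:
            if t in reach and s not in reach:
                reach.add(s)
                changed = True
    return reach

def is_safe(initial, transitions, bad):
    """Safety holds iff no initial state can reach a bad state."""
    return initial.isdisjoint(backward_reachable(transitions, bad))

# Toy protocol: with the trying -> error transition present, 'error' is
# reachable from 'idle' and the safety property fails; removing it
# restores safety.
T = {("idle", "trying"), ("trying", "critical"),
     ("critical", "idle"), ("trying", "error")}
assert not is_safe({"idle"}, T, {"error"})
assert is_safe({"idle"}, T - {("trying", "error")}, {"error"})
```

In the infinite-state case, the sets of states cannot be enumerated, so the thesis represents them symbolically (as graph patterns with summary nodes) and computes predecessors on those representations instead.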
339

Immobilisierung von Palladium mittels 1,4-Bis-(4‘-pyrazolyl)benzen und dessen Anwendung in der heterogenen Katalyse

Liebold, Claudia 08 November 2013 (has links) (PDF)
The immobilization of homogeneous catalysts is an important method for making active species separable and reusable. In this work, a new microporous coordination polymer was generated by complexing palladium with 1,4-bis(4′-pyrazolyl)benzene, and it was successfully employed as a heterogeneous catalyst in the Suzuki-Miyaura cross-coupling reaction. Complete conversions and high selectivities were achieved, comparable to commercially available homogeneous catalysts. A distinctive feature of the catalyst, besides its exceptionally high chemical stability, is that its structural properties can be varied through the choice of synthesis conditions, thereby tuning its catalytic activity.
340

Variable Selection and Function Estimation Using Penalized Methods

Xu, Ganggang 2011 December 1900 (has links)
Penalized methods are becoming more and more popular in statistical research. This dissertation research covers two major aspects of applications of penalized methods: variable selection and nonparametric function estimation. The following two paragraphs give brief introductions to each of the two topics. Infinite variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by α ∈ (0, 2). We show that by combining the least absolute deviation loss function and the adaptive lasso penalty, we can consistently identify the true model. At the same time, the resulting coefficient estimator converges at a rate of n^(−1/α). The proposed approach gives a unified variable selection procedure for both the finite and infinite variance autoregressive models. While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it is much less so for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. By focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in that its minimization is asymptotically equivalent to the minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV, which leads to a more efficient algorithm for selecting the smoothing parameters. We show that the simplified CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property.
This CV criterion also provides a completely data driven approach to select working covariance structure using generalized estimating equations in longitudinal data analysis. Our results are applicable to additive, linear varying-coefficient, nonlinear models with data from exponential families.
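Leave-subject-out CV can be illustrated with a simple ridge penalty on hypothetical clustered data. Ridge regression stands in here for the penalized smoothers of the dissertation; the key point is only that each held-out fold is one subject's entire cluster, not a single observation.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Penalized least squares: beta = (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def leave_subject_out_cv(X, y, groups, lam):
    """Leave-subject-out CV: hold out one subject's whole cluster at a time."""
    err = 0.0
    for g in np.unique(groups):
        train, test = groups != g, groups == g
        beta = ridge_fit(X[train], y[train], lam)
        err += float(np.sum((y[test] - X[test] @ beta) ** 2))
    return err / len(y)

# Hypothetical clustered data: 5 subjects, 10 observations each.
rng = np.random.default_rng(0)
groups = np.repeat(np.arange(5), 10)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=50)
scores = {lam: leave_subject_out_cv(X, y, groups, lam)
          for lam in [0.01, 0.1, 1.0, 10.0, 100.0]}
best = min(scores, key=scores.get)
assert best in (0.01, 0.1, 1.0)   # heavy shrinkage should score worse here
```

The dissertation's contribution is to show that minimizing this criterion is asymptotically equivalent to minimizing the true loss, and to replace the grid search above with an efficient Newton-type minimization over possibly many smoothing parameters.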
