251

Robust pricing and hedging beyond one marginal

Spoida, Peter January 2014
The robust pricing and hedging approach in Mathematical Finance, pioneered by Hobson (1998), makes statements about non-traded derivative contracts by imposing very few assumptions on the underlying financial model and instead directly using information contained in traded options, typically call or put option prices. These prices are informative about the marginal distributions of the asset. Mathematically, the theory of Skorokhod embeddings provides one way to approach robust problems. In this thesis we consider mostly robust pricing and hedging problems for Lookback options (options written on the terminal maximum of an asset) and convex Vanilla options (options written on the terminal value of an asset), and we extend the analysis predominantly found in the literature on robust problems in two directions: firstly, options with multiple maturities are available for trading (mathematically this corresponds to multiple marginal constraints), and secondly, restrictions on the total realized variance of asset trajectories are imposed. Probabilistically, in both cases, we develop new optimal solutions to the Skorokhod embedding problem. More precisely, in Part I we start by constructing an iterated Azéma-Yor type embedding (a solution to the n-marginal Skorokhod embedding problem, see Chapter 2). Subsequently, its implications are presented in Chapter 3. From a Mathematical Finance perspective we obtain explicitly the optimal superhedging strategy for Barrier/Lookback options. From a probability theory perspective, we find the maximum maximum of a martingale which is constrained by finitely many intermediate marginal laws. Further, as a by-product, we discover a new class of martingale inequalities for the terminal maximum of a càdlàg submartingale, see Chapter 4. These inequalities enable us to re-derive the sharp versions of Doob's inequalities. In Chapter 5 a different problem is solved: motivated by the fact that in some markets both Vanilla and Barrier options with multiple maturities are traded, we characterize the set of market models in this case. In Part II we incorporate the restriction that the total realized variance of every asset trajectory is bounded by a constant, as previously suggested by Mykland (2000). We further assume that finitely many put options with one fixed maturity are traded. After introducing the general framework in Chapter 6, we analyse the associated robust pricing and hedging problem for convex Vanilla and Lookback options in Chapters 7 and 8. Robust pricing is achieved through the construction of appropriate Root solutions to the Skorokhod embedding problem. Robust hedging and pathwise duality are obtained by a careful development of dynamic pathwise superhedging strategies. Further, we characterize the existence of market models with a suitable notion of arbitrage.
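To fix ideas, the single-marginal prototype of such a superhedge (standard background in the spirit of Hobson's lookback analysis, recalled here for orientation rather than as the thesis's n-marginal construction) is the pathwise inequality for a one-touch claim on the running maximum \overline{S}_T = \max_{t \le T} S_t: for any strike K < B,

    \mathbf{1}_{\{\overline{S}_T \ge B\}} \;\le\; \frac{(S_T - K)^+}{B - K} \;+\; \frac{B - S_T}{B - K}\,\mathbf{1}_{\{\overline{S}_T \ge B\}},

which corresponds to holding (B - K)^{-1} call options struck at K and selling (B - K)^{-1} forward contracts on the asset at the first time the barrier B is reached. Taking expectations under any martingale measure consistent with the traded call price C(K) yields the robust bound \mathbb{P}(\overline{S}_T \ge B) \le \inf_{K < B} C(K)/(B - K), attained by the Azéma-Yor embedding; the iterated embedding constructed in Chapter 2 plays the analogous role when call prices are available at several maturities.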
252

On structural studies of high-density potassium and sodium

McBride, Emma Elizabeth January 2014
The alkali elements at ambient conditions are well described by the nearly-free electron (NFE) model, yet show a remarkable departure from this “simple” behaviour with increasing pressure. Low-symmetry complex structures are observed in all of them, and anomalous melting has been observed in lithium (Li), sodium (Na), rubidium (Rb), and caesium (Cs). In this Thesis, static and dynamic compression techniques have been used to investigate the high-pressure high-temperature behaviour of the alkali elements potassium (K) and Na. Utilising diamond anvil pressure cells and external resistive heating, both in-air and in-vacuum, the melting curve of K has been determined to 24 GPa and 750 K, and is found to be remarkably similar to that of Na, but strikingly different to that reported previously. Furthermore, there is some evidence to suggest that a change in the compressibility of liquid K occurs at lower pressures than the solid-solid phase transitions, perhaps indicating structural transitions occurring in the liquid phase, similar to those in the underlying solid. This could suggest a mechanism to explain the anomalous melting behaviour observed. Previous ab initio computational studies indicate that the unusual melting curve of Na arises due to structural and electronic transitions occurring in the liquid, mirroring those found in the underlying solid at higher pressures. The discovery that the melting curve of K is very similar to that of Na suggests that the same physical phenomena predicted for Na could be responsible for the high-pressure melting behaviour observed in K. The tI19 phase of K, observed above 20 GPa at 300 K, is a composite incommensurate host-guest structure consisting of 1D chains of guest atoms surrounded by a tetragonal host framework. Along the unique c-axis, the host and guest are incommensurate with each other. During the melting studies described above, it was observed that with increasing temperature the more weakly bonded guest chains become increasingly disordered while the host structure remains unchanged. To investigate and characterise this order-disorder transition, in situ synchrotron X-ray diffraction studies were conducted on single-crystal and quasi-single-crystal samples of tI19-K. An order-disorder phase line has been mapped out to 50 GPa and 650 K. Perhaps the most striking departure from NFE behaviour in the alkali elements is observed in Na at pressures above 200 GPa, where it transforms to a transparent electrical insulator. This phase is a so-called elemental “electride”, which may be thought of as being pseudo-ionically bonded. Electrides are predicted to exist in many elements, but at pressures far beyond the current capabilities of static pressure techniques. Utilising laser-driven quasi-isentropic compression techniques, dynamic compression experiments were performed on Na to see if it is possible to observe this electride phase on the timescales of a dynamic compression experiment (ns). Optical velocimetry and reflectivity of the sample were measured directly to determine pressure and monitor the onset of the transparent phase, respectively.
253

Dynamic Spillovers between Commodity and Currency Markets

Antonakakis, Nikolaos, Kizys, Renatas 01 March 2015
In this study, we examine the dynamic link between returns and volatility of commodities and currency markets. Based on weekly data over the period from January 6, 1987 to July 22, 2014, we find the following empirical regularities. First, our results suggest that the information content of gold, silver, platinum, and the CHF/USD and GBP/USD exchange rates can help improve the forecast accuracy of returns and volatilities of palladium, crude oil and the EUR/CHF and GBP/USD exchange rates. Second, gold (CHF/USD) is the dominant commodity (currency) transmitter of return and volatility spillovers to the remaining assets in our model. Third, the analysis of dynamic spillovers shows time- and event-specific patterns. For instance, the dynamic spillover effects originating in gold and silver (platinum) returns and volatility intensified (degraded) in the period marked by the global financial crisis. After the global financial crisis, the net transmitting role of gold and silver (platinum) return shocks weakened (strengthened), while the net transmitting role of gold, silver and platinum volatility shocks remained relatively high. Overall, our findings reveal that, while the static analysis clearly classifies the aforementioned variables into net transmitters and net receivers, the dynamic analysis denotes episodes wherein the role of transmitters and receivers of return (volatility) spillovers can be interrupted or even reversed. Hence, even if certain commonalities prevail in each identified category of commodities, such commonalities are time- and event-dependent. (authors' abstract)
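Spillover accounting of this kind is usually built on a forecast-error variance decomposition of a vector autoregression, in the spirit of Diebold and Yilmaz. The sketch below is purely illustrative (it is not the authors' code, and the decomposition matrix and asset names are invented); it shows how such a decomposition is turned into total, directional and net spillovers, i.e. how assets get classified as net transmitters or net receivers.

    import numpy as np

    def spillover_table(fevd, names):
        """Diebold-Yilmaz-style spillover measures from a normalised forecast-error
        variance decomposition: fevd[i, j] is the share of asset i's H-step forecast
        error variance attributable to shocks in asset j (rows sum to one)."""
        fevd = np.asarray(fevd, dtype=float)
        n = fevd.shape[0]
        off = fevd - np.diag(np.diag(fevd))      # cross-asset contributions only
        from_others = off.sum(axis=1) * 100      # spillovers received by each asset
        to_others = off.sum(axis=0) * 100        # spillovers transmitted by each asset
        net = to_others - from_others            # > 0: net transmitter, < 0: net receiver
        for name, t_out, t_in, t_net in zip(names, to_others, from_others, net):
            print(f"{name:>8}: to others {t_out:6.1f}  from others {t_in:6.1f}  net {t_net:+6.1f}")
        total = off.sum() / n * 100              # total spillover index
        print(f"total spillover index: {total:.1f}%")
        return total

    # Toy 3-asset decomposition (hypothetical numbers).
    fevd = [[0.70, 0.20, 0.10],
            [0.25, 0.60, 0.15],
            [0.30, 0.25, 0.45]]
    spillover_table(fevd, ["gold", "silver", "CHF/USD"])

Re-computing the table over a rolling estimation window gives the dynamic, time- and event-specific picture described in the abstract.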
254

Improving dynamic analysis with data flow analysis

Chang, Walter Chochen 26 October 2010
Many challenges in software quality can be tackled with dynamic analysis. However, these techniques are often limited in their efficiency or scalability as they are often applied uniformly to an entire program. In this thesis, we show that dynamic program analysis can be made significantly more efficient and scalable by first performing a static data flow analysis so that the dynamic analysis can be selectively applied only to important parts of the program. We apply this general principle to the design and implementation of two different systems, one for runtime security policy enforcement and the other for software test input generation. For runtime security policy enforcement, we enforce user-defined policies using a dynamic data flow analysis that is more general and flexible than previous systems. Our system uses the user-defined policy to drive a static data flow analysis that identifies and instruments only the statements that may be involved in a security vulnerability, often eliminating the need to track most objects and greatly reducing the overhead. For taint analysis on a set of five server programs, the slowdown is only 0.65%, two orders of magnitude lower than previous taint tracking systems. Our system also has negligible overhead on file disclosure vulnerabilities, a problem that taint tracking cannot handle. For software test case generation, we introduce the idea of targeted testing, which focuses testing effort on select parts of the program instead of treating all program paths equally. Our “Bullseye” system uses a static analysis performed with respect to user-defined “interesting points” to steer the search down certain paths, thereby finding bugs faster. We also introduce a compiler transformation that allows symbolic execution to automatically perform boundary condition testing, revealing bugs that could be missed even if the correct path is tested. For our set of 9 benchmarks, Bullseye finds bugs an average of 2.5× faster than a conventional depth-first search and finds numerous bugs that DFS could not. In addition, our automated boundary condition testing transformation allows both Bullseye and depth-first search to find numerous bugs that they could not find before, even when all paths were explored.
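The core idea, letting a cheap static data-flow pass decide which statements the dynamic analysis must instrument, can be sketched in a few lines. The toy straight-line "program", the taint source and sink, and the instrumentation decision below are illustrative assumptions only, not the thesis's actual system.

    # Toy straight-line "program": each statement assigns `target` from the listed `uses`.
    program = [
        ("a", ["user_input"]),   # a = read()        <- taint source
        ("b", ["a"]),            # b = f(a)
        ("c", ["config"]),       # c = g(config)     <- untainted
        ("d", ["b", "c"]),       # d = b + c         <- flows to a sensitive sink
        ("e", ["c"]),            # e = h(c)
    ]
    sources, sinks = {"user_input"}, {"d"}

    def forward_taint(program, sources):
        """Static forward pass: over-approximate variables that may carry source data."""
        tainted, changed = set(sources), True
        while changed:                           # iterate to a fixed point
            changed = False
            for target, uses in program:
                if target not in tainted and any(u in tainted for u in uses):
                    tainted.add(target)
                    changed = True
        return tainted

    def backward_reach(program, sinks):
        """Static backward pass: over-approximate variables whose value may reach a sink."""
        reaching, changed = set(sinks), True
        while changed:
            changed = False
            for target, uses in program:
                if target in reaching:
                    for u in uses:
                        if u not in reaching:
                            reaching.add(u)
                            changed = True
        return reaching

    # Only statements on a possible source-to-sink path need run-time tracking; the rest
    # run uninstrumented, which is where the overhead reduction comes from.
    needs_tracking = forward_taint(program, sources) & backward_reach(program, sinks)
    for target, uses in program:
        action = "instrument" if target in needs_tracking else "skip"
        print(f"{action:>10}: {target} = f({', '.join(uses)})")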
255

NEW ULTRA-LIGHTWEIGHT STIFF PANELS FOR SPACE APERTURES

Black, Jonathan T. 01 January 2006
The stiff, ultra-lightweight thermal-formed polyimide panels considered in this dissertation are examples of next-generation gossamer structures that resolve some of the technology barriers of previous, membrane-dominated gossamer designs while maintaining their low mass and low stowage volume characteristics. The research involved statically and dynamically characterizing and modeling several of these panels to develop validated computer models which can be used to determine the effects of changing manufacturing parameters and scalability. Static characterization showed substantial local nonlinear behavior that was replicated by new physics-based finite element models, and global linear bending behavior that was modeled using classical shell finite elements incorporating effective properties in place of bulk material properties to represent the unique stiffening structure of these panels. Dynamic characterization was performed on individual panels using standard impact hammer and accelerometer testing, enabling successful extraction of several structural natural frequencies and mode shapes. Additionally, the three-dimensional time history of the surface of the panels was rendered from video data, and temporal filters were applied to the data to examine the frequency content. These data were also correlated to the shell element numerical models. Overall, the research contributes to the total knowledge base of gossamer technologies, advances stiff panel-based structures toward space qualification, and demonstrates their potential for use in apertures and other spacecraft.
256

Improved estimation of pore connectivity and permeability in deepwater carbonates with the construction of multi-layer static and dynamic petrophysical models

Ferreira, Elton Luiz Diniz 09 October 2014
A new method is presented here for petrophysical interpretation of heterogeneous carbonates using well logs and core data. Developing this new method was necessary because conventional evaluation methods tend to yield inaccurate predictions of pore connectivity and permeability in the studied field. Difficulties in the petrophysical evaluation of this field are related to shoulder-bed effects, presence of non-connected porosity, rock layers that are thinner than the vertical resolution of well-logging tools, and the effect of oil-base mud (OBM) invasion on the measurements. These problems give rise to uncommon measurements and rock properties, such as: (a) reservoir units contained within thinly bedded and laminated sequences, (b) very high apparent resistivity readings in the oil-bearing zone, (c) separation of apparent resistivity logs with different depths of investigation, (d) complex unimodal and bimodal transverse relaxation distributions of nuclear magnetic resonance (NMR) measurements, (e) reservoir units having total porosity of 0.02 to 0.26 and permeability between 0.001 mD and 4.2 D, (f) significant differences between total and sonic porosity, and (g) low and constant gamma-ray values. The interpretation method introduced in this thesis is based on the detection of layer boundaries and rock types from high-resolution well logs and on the estimation of layer-by-layer properties using numerical simulation of resistivity, nuclear, and NMR logs. Layer properties were iteratively adjusted until the available well logs were reproduced by numerical simulations. This method honors the reservoir geology and physics of the measurements while adjusting the layer properties; it reduces shoulder-bed effects on well logs, especially across thinly bedded and laminated sequences, thereby yielding improved estimates of interconnected porosity and permeability in rocks that have null mobile water saturation and that were invaded with OBM. Additionally, dynamic simulations of OBM invasion in free-water depth intervals were necessary to estimate permeability. It is found that NMR transverse relaxation measurements are effective for determining rock and fluid properties but are unreliable for the accurate calculation of porosity and permeability in thinly bedded and highly laminated depth sections. In addition, this thesis shows that low resistivity values are associated with the presence of microporosity, and high resistivity values are associated with the presence of interconnected and vuggy porosity. In some layers, a fraction of the vuggy porosity is associated with isolated pores, which do not contribute to fluid flow. An integrated evaluation using multiple measurements, including sonic logs, is therefore necessary to detect isolated porosity. After the correction and simulation, results show, on average, a 34% improvement in the agreement between estimated and core-measured permeability. Closer agreement was not possible because of limitations in tool resolution and the difficulty of obtaining a precise depth match between core and well-log measurements.
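The layer-by-layer workflow, adjusting layer properties until forward-simulated logs reproduce the measured ones, is at its core a constrained inversion. The sketch below illustrates that loop with a deliberately simple linear "tool response" standing in for the resistivity/nuclear/NMR simulators; the forward model, layer count and all numbers are invented for illustration and are not the thesis's petrophysical models.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    # Toy setting: 4 layers, one unknown property per layer (say, porosity). The "tool"
    # averages neighbouring layers, a crude stand-in for shoulder-bed effects.
    n_layers = 4
    true_props = np.array([0.05, 0.22, 0.08, 0.18])

    def simulate_log(layer_props):
        """Hypothetical forward model: the tool response smears adjacent layers."""
        kernel = np.array([0.25, 0.5, 0.25])
        padded = np.pad(layer_props, 1, mode="edge")
        return np.convolve(padded, kernel, mode="valid")   # one reading per layer

    measured = simulate_log(true_props) + rng.normal(0, 0.002, n_layers)  # synthetic log

    def residuals(layer_props):
        return simulate_log(layer_props) - measured

    # Iteratively adjust layer properties until the simulated log reproduces the measured one.
    fit = least_squares(residuals, x0=np.full(n_layers, 0.10), bounds=(0.0, 0.4))
    print("estimated layer properties:", np.round(fit.x, 3))
    print("true layer properties:     ", true_props)

Because each reading mixes adjacent layers, fitting layer values through the forward model rather than reading the log at face value is what removes the shoulder-bed bias; the thesis applies the same principle with physics-based resistivity, nuclear and NMR simulators.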
257

A comparison of helium dilution and plethysmography in measuring static lung volumes

Guldbrand, Anna January 2008
In order to examine the usefulness of the multi-breath helium dilution method (MB), it was compared to the single-breath helium dilution method (SB) and body plethysmography (BP). Residual volume (RV), total lung capacity (TLC) and vital capacity (VC) were measured in seventeen subjects with obstructive (11) or restrictive (6) lung disease and four normal subjects. With information from professional literature and current periodicals, the advantages and disadvantages of all three methods were compared. ANOVA and Student's t-test were performed on the measurement results. The results of the statistical tests indicate that there are differences among the methods in the group of obstructive patients. They also reveal a notable difference between the MB and SB methods when measuring the same parameter. In addition, it was noted that none of the existing sets of prediction equations fulfill the requirements established for high-quality lung function testing. Although a thorough evaluation of the reproducibility of the method is still required, the MB method appears to be a viable alternative to body plethysmography. We claim that measuring the above-mentioned static lung volumes with only the single-breath helium dilution method cannot be considered satisfactory practice.
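For context (standard respiratory physiology rather than a result of the thesis), the closed-circuit helium dilution estimate rests on a helium mass balance: the tracer is neither absorbed nor produced, so the amount of helium before and after equilibration is the same,

    C_1 V_{sp} = C_2 \left( V_{sp} + V_L \right) \quad\Longrightarrow\quad V_L = V_{sp}\,\frac{C_1 - C_2}{C_2},

where V_{sp} is the spirometer circuit volume, C_1 and C_2 are the helium concentrations before and after equilibration, and V_L is the lung volume in communication with the circuit (FRC when the patient is connected at end-tidal expiration); RV = FRC - ERV and TLC = RV + VC then follow from the spirometric manoeuvres. In obstructive disease, gas behind poorly communicating airways may not equilibrate, especially within a single breath, which is one plausible reason for the differences between the SB and MB methods reported above.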
258

Impacts of liquefaction and lateral spreading on bridge pile foundations from the February 22nd 2011 Christchurch earthquake

Winkley, Anna Margaret Mathieson January 2013
The Mw 6.2 February 22nd 2011 Christchurch earthquake (and others in the 2010-2011 Canterbury sequence) provided a unique opportunity to study the devastating effects of earthquakes first-hand and learn from them for future engineering applications. All major events in the Canterbury earthquake sequence caused widespread liquefaction throughout Christchurch’s eastern suburbs, particularly extensive and severe during the February 22nd event. Along large stretches of the Avon River banks (and to a lesser extent along the Heathcote) significant lateral spreading occurred, affecting bridges and the infrastructure they support. The first stage of this research involved conducting detailed field reconnaissance to document liquefaction and lateral spreading-induced damage to several case study bridges along the Avon River. The case study bridges cover a range of ages and construction types but all are reinforced concrete structures which have relatively short, stiff decks. These factors combined led to a characteristic deformation mechanism involving deck-pinning and abutment back-rotation with consequent damage to the abutment piles and slumping of the approaches. The second stage of the research involved using pseudo-static analysis, a simplified seismic modelling tool, to analyse two of the bridges. An advantage of pseudo-static analysis over more complicated modelling methods is that it uses conventional geotechnical data in its inputs, such as SPT blowcount and CPT cone resistance and local friction. Pseudo-static analysis can also be applied without excessive computational power or specialised knowledge, yet it has been shown to capture the basic mechanisms of pile behaviour. Single pile and whole bridge models were constructed for each bridge, and both cyclic and lateral spreading phases of loading were investigated. Parametric studies were carried out which varied the values of key parameters to identify their influence on pile response, and computed displacements and damages were compared with observations made in the field. It was shown that pseudo-static analysis was able to capture the characteristic damage mechanisms observed in the field, however the treatment of key parameters affecting pile response is of primary importance. Recommendations were made concerning the treatment of these governing parameters controlling pile response. In this way the future application of pseudo-static analysis as a tool for analysing and designing bridge pile foundations in liquefying and laterally spreading soils is enhanced.
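As an illustration of the pseudo-static idea (a pile loaded statically through soil springs by an imposed free-field lateral-spread displacement), the sketch below solves a linear Euler beam on Winkler springs by finite differences. Every input (EI, spring stiffness, displacement profile) and the fixed-end boundary conditions are invented for illustration; an actual analysis of the kind described in the abstract would use soil springs derived from SPT/CPT data, nonlinear soil behaviour and realistic head and tip conditions.

    import numpy as np

    L, n = 10.0, 101                  # pile length [m] and number of nodes (assumed)
    z = np.linspace(0.0, L, n)
    h = z[1] - z[0]
    EI = 5.0e7                        # flexural rigidity [N m^2] (assumed)
    k = np.full(n, 5.0e6)             # subgrade reaction [N/m per m of pile] (assumed)

    # Free-field lateral-spread displacement: 0.3 m near the surface, zero at the pile tip.
    y_ff = 0.3 * (1.0 - z / L)

    # Assemble EI*y'''' + k*(y - y_ff) = 0 with the standard 5-point stencil for y''''.
    A = np.zeros((n, n))
    b = np.zeros(n)
    stencil = np.array([1.0, -4.0, 6.0, -4.0, 1.0]) * EI / h**4
    for i in range(2, n - 2):
        A[i, i - 2:i + 3] += stencil
        A[i, i] += k[i]
        b[i] = k[i] * y_ff[i]

    # Toy boundary conditions: both ends held fixed (a real pile head would be free or
    # restrained by the deck, and the tip embedded in a competent layer).
    for i in (0, 1, n - 2, n - 1):
        A[i, i] = 1.0

    y = np.linalg.solve(A, b)                       # pile deflection along depth
    M = EI * np.gradient(np.gradient(y, z), z)      # bending moment along the pile
    print(f"peak deflection: {y.max():.3f} m, peak moment: {abs(M).max() / 1e3:.0f} kN m")

The value of such a model in a parametric study is that spring stiffness, displacement profile and boundary restraint can be varied cheaply to see which parameters govern pile response, which is the role pseudo-static analysis plays for the case-study bridges above.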
259

Towards a Gold Standard for Points-to Analysis

Gutzmann, Tobias January 2010
Points-to analysis is a static program analysis that computes reference information for a given input program. It serves as input to many client applications in optimizing compilers and software engineering tools. Unfortunately, the Gold Standard – i.e., the exact reference information for a given program – is impossible to compute automatically for all but trivial cases, and thus, little can be said about the accuracy of points-to analysis. This thesis aims at paving the way towards a Gold Standard for points-to analysis. For this, we discuss theoretical implications and practical challenges that occur when comparing results obtained by different points-to analyses. We also show ways to improve points-to analysis by different means, e.g., combining different analysis implementations, and a novel approach to path sensitivity. We support our theories with a number of experiments.
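One way to make the comparison of analysis results concrete (a sketch of the general idea only, not the tooling developed in the thesis): represent each result as a map from variables to sets of abstract objects. Two sound over-approximations can then be combined by intersection without losing soundness, and any dynamically observed points-to relation gives a lower bound that every sound static result must contain. The variable and object names below are invented.

    # Points-to results: variable name -> set of abstract heap objects (invented data).
    analysis_a = {"x": {"o1", "o2"}, "y": {"o2", "o3"},       "z": {"o4"}}
    analysis_b = {"x": {"o1"},       "y": {"o2", "o3", "o5"}, "z": {"o4"}}
    observed   = {"x": {"o1"},       "y": {"o2"},             "z": {"o4"}}  # e.g. from test runs

    def combine(a, b):
        """Intersect two sound over-approximations; the result is still sound and at
        least as precise as either input."""
        return {v: a[v] & b[v] for v in a.keys() & b.keys()}

    def missed(static, lower_bound):
        """Relations actually observed at run time but not predicted by the static result
        (any entry here would disprove the analysis's soundness)."""
        return {v: rel for v in lower_bound
                if (rel := lower_bound[v] - static.get(v, set()))}

    combined = combine(analysis_a, analysis_b)
    print("combined result:  ", combined)
    print("missed relations: ", missed(combined, observed) or "none")

    # Average points-to set size, a common (if crude) precision proxy.
    for name, result in [("A", analysis_a), ("B", analysis_b), ("A & B", combined)]:
        avg = sum(len(s) for s in result.values()) / len(result)
        print(f"analysis {name}: average points-to set size {avg:.2f}")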
260

Auxiliary computations: a framework for a step-wise, non-disruptive introduction of static guarantees to untyped programs using partial evaluation techniques

Herhut, S. January 2010
Type inference can be considered a form of partial evaluation that only evaluates a program with respect to its type annotations. Building on this key observation, this dissertation presents a uniform framework for expressing computation, its dynamic properties and corresponding static type information. By using a unified approach, the static phase divide between values and types is lifted. Instead, computations and properties can be freely assigned to the static or dynamic phase of computation. Even more, moving a property from one world to the other does not require any program modifications. This thesis builds a bridge between two worlds: that of statically typed languages and the dynamically typed world. The former is wanted for the static guarantees it offers and its detection of a range of defects. With the increasing power of available type systems, the kinds of errors that can be statically detected are growing, nearing the goal of proving overall program correctness from the program’s source code alone. However, such power does come at a price: type systems are becoming more complex, restrictive and invasive, to the point where specifying type annotations becomes as complex as specifying the algorithm itself. Untyped languages, in contrast, may provide less static safety, but they have simpler semantics and offer higher flexibility. They allow programmers to express their ideas without worrying about provable correctness. Not surprisingly, untyped languages have a strong following when it comes to prototyping and rapid application development. Using the framework presented in this thesis, the programmer can have both: prototyping applications using a dynamically typed approach and gradually refining prototypes into programs with static guarantees. Technically, this flexibility is achieved with the novel concept of auxiliary computations. Auxiliary computations are additional streams of computation. Alongside the computation of the data itself, they model the computation of properties of that data. These streams may depend on the actual data that is computed, as well as on further auxiliary computations. This expressiveness brings auxiliary computations into the domain of dependent types. Partial evaluation of auxiliary computations is used to infer static knowledge from them. Due to the interdependencies between auxiliary computations, evaluating only those parts of a program that contribute to a property is non-trivial. A further contribution of this work is the use of demands on computations to narrow the extent of partial evaluation to a single property. An algorithm for demand inference is presented and the correctness of the inferred demands is shown.
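A toy rendering of the idea (an invented example, not the framework's actual notation): pair each value with an auxiliary stream that computes one of its properties, so that partially evaluating only the property stream yields a static guarantee even while the data stream remains dynamic.

    class Tracked:
        """A value paired with an auxiliary computation of one of its properties (length).

        The data may be unknown in the static phase (value is None) while the auxiliary
        stream is still evaluable; partial evaluation with respect to the property
        exploits exactly this."""

        def __init__(self, value, length):
            self.value = value      # dynamic-phase data (None = not available statically)
            self.length = length    # auxiliary computation: a property of the data

        def concat(self, other):
            # The property is computed without needing the data itself.
            both_known = self.value is not None and other.value is not None
            data = self.value + other.value if both_known else None
            return Tracked(data, self.length + other.length)

    # "Static" phase: the concrete lists do not exist yet, only their lengths are known.
    xs, ys = Tracked(None, 3), Tracked(None, 4)
    print(xs.concat(ys).length)     # 7, a guarantee obtained before any data flows

    # "Dynamic" phase: the same code runs on actual data; the property moved phases freely.
    print(Tracked([1, 2, 3], 3).concat(Tracked([4, 5, 6, 7], 4)).value)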
