441

Waveguide Sources of Photon Pairs

Horn, Rolf January 2011 (has links)
This thesis describes various methods for producing photon pairs from waveguides. It covers topics such as waveguide coupling and phase matching, along with the measurement techniques used to infer photon pair production. A new proposal to solve the phase matching problem is described, along with two conceptual methods for generating entangled photon pairs. Photon pairs are also experimentally demonstrated from a third, novel structure called a Bragg Reflection Waveguide (BRW). The new proposal to solve the phase matching problem is called Directional Quasi-Phase Matching (DQPM). It is a technique that exploits the directional dependence of the non-linear susceptibility ($\chi^{(2)}$) tensor, and is aimed at materials that do not allow birefringent phase matching or periodic poling. In particular, it focuses on waveguides in which the interplay between the propagation direction, the electric field polarizations and the nonlinearity can periodically change the strength and sign of the nonlinear interaction to achieve quasi-phase matching. One of the new conceptual methods for generating entangled photon pairs involves sandwiching together two waveguides from two differently oriented but otherwise similar crystals. The idea stems from the design of a Michelson interferometer, which interferes the paths over which two distinct photon pair processes can occur, thereby creating entanglement in any pair of photons created in the interferometer. By forcing, or sandwiching, the two waveguides together, the physical separation that exists in the standard Michelson-type interferometer is removed and the interferometer is effectively squashed; the two photon pair processes then occupy the same physical path. This improves the stability of the interferometer in addition to miniaturizing it. The technical challenges involved in sandwiching the two waveguides are briefly discussed. The main result of this thesis is the observation of photon pairs from the BRW. By analyzing the time correlation between two single-photon detection events, spontaneous parametric down conversion (SPDC) of a picosecond pulsed Ti:sapphire laser is demonstrated, mediated by a ridge BRW. The results show evidence for type-0, type-I and type-II phase matching of pump light at 783 nm, 786 nm and 789 nm to down-converted light that is strongly degenerate at 1566 nm, 1572 nm and 1578 nm respectively. The inferred efficiency of the BRW was 9.8$\cdot$10$^{-9}$ photon pairs per pump photon, which contrasts with the predicted type-0 efficiency of 2.65$\cdot$10$^{-11}$. These data are presented for the first time for such waveguides, and represent a significant advance towards integrating sources of quantum information into the existing telecommunications infrastructure.
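As a consistency check on the quoted wavelengths (added here for illustration, not taken from the thesis): energy conservation in SPDC requires $\omega_p = \omega_s + \omega_i$, i.e. $1/\lambda_p = 1/\lambda_s + 1/\lambda_i$; in the degenerate case $\lambda_s = \lambda_i = 2\lambda_p$, so pump light at 783 nm, 786 nm and 789 nm indeed corresponds to degenerate down-converted light at 1566 nm, 1572 nm and 1578 nm. Quasi-phase matching additionally requires the wave-vector mismatch to be compensated, schematically $\Delta k = k_p - k_s - k_i - 2\pi/\Lambda \approx 0$ for first-order QPM with modulation period $\Lambda$; in DQPM that modulation is supplied by the direction-dependent sign of $\chi^{(2)}$ along the propagation path rather than by periodic poling.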
442

Improvement Of Computational Software For Composite Curved Bridge Analysis

Kalayci, Ahmet Serhat 01 February 2005 (has links) (PDF)
In highway bridge construction, composite curved girder bridges have recently become more popular. Reduced construction time, long span coverage, economy and aesthetics make them more attractive than other structural systems. Although some methods exist for the analysis of such systems, each has shortcomings. Among these methods, the use of the Finite Element Method (FEM) is limited outside academic environments, and using commercial FEM software packages for such systems is cumbersome because building a model takes too much time. In view of these problems, a computational program called UTRAP was developed in 2002 to analyze bridges under construction loads, taking into account the early-age deck concrete. As the topic of this thesis, the program was restructured and new features were added. The thesis discusses the program structure, modeling considerations and recommendations, together with parametric studies.
443

Design Of A Computer Interface For Automatic Finite Element Analysis Of An Excavator Boom

Yener, Mehmet 01 May 2005 (has links) (PDF)
The aim of this study is to design a computer interface that links the user to the commercial Finite Element Analysis (FEA) program MSC.Marc-Mentat, in order to perform automatic FE analysis of an excavator boom, using DELPHI as the development platform. The boom geometry is parametrized to add flexibility to the interface, which is called OPTIBOOM. Parametric FE analysis of the boom shortens the design stages and helps to find the optimum design in terms of stresses and mass.
444

Modeling and Control of Parametric Roll Resonance

Holden, Christian January 2011 (has links)
Parametric roll resonance is a dangerous resonance phenomenon affecting several types of ships, such as destroyers, RO-RO passenger vessels, cruise ships, fishing vessels and especially container ships. In the worst case, parametric roll can cause roll angles of 50 degrees or more and damage in the tens of millions of US dollars. Empirical and mathematical investigations have concluded that parametric roll occurs due to periodic changes in the waterplane area of the ship. If the vessel is sailing in longitudinal seas, with waves of approximately the same length as the ship and an encounter frequency of about twice the natural roll frequency, then parametric resonance can occur. While there is a significant amount of literature on the hydrodynamics of parametric roll, there is less on controlling and stopping the phenomenon through active control. The main goal of this thesis has been to develop controllers capable of stopping parametric roll, and two main results on control are presented. To derive, analyze and simulate the controllers, it proved necessary to develop novel models; the thesis thus contains four major contributions on modeling. The main results are, in order of appearance in the thesis: a six-DOF computer model for parametric roll; a one-DOF model of parametric roll for non-constant velocity; a three-DOF model of parametric roll; a seven-DOF model for ships with u-tanks of arbitrary shape; a frequency detuning controller; and an active u-tank based controller for parametric roll.
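For illustration only, the resonance mechanism described above (roll restoring moment modulated at roughly twice the natural roll frequency) can be sketched with a minimal one-DOF Mathieu-type equation; all parameter values below are hypothetical and this is not one of the thesis's models:

    import numpy as np
    from scipy.integrate import solve_ivp

    omega_phi = 0.35            # natural roll frequency [rad/s] (hypothetical)
    zeta = 0.02                 # relative roll damping (hypothetical)
    h = 0.7                     # relative restoring variation in waves (hypothetical)
    omega_e = 2.0 * omega_phi   # encounter frequency near twice the roll frequency

    def roll(t, x):
        phi, phi_dot = x
        # restoring term modulated periodically by the wave-induced waterplane change
        phi_ddot = (-2 * zeta * omega_phi * phi_dot
                    - omega_phi**2 * (1 + h * np.cos(omega_e * t)) * np.sin(phi))
        return [phi_dot, phi_ddot]

    sol = solve_ivp(roll, (0.0, 600.0), [0.01, 0.0], max_step=0.1)
    print(f"max roll angle: {np.degrees(np.abs(sol.y[0]).max()):.1f} deg")

With the modulation switched off (h = 0) the small initial roll should simply decay; near the 2:1 frequency ratio it grows until bounded by the nonlinear restoring term, which is the qualitative behaviour the abstract describes.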
445

Applications of constrained non-parametric smoothing methods in computing financial risk

Wong, Chung To (Charles) January 2008 (has links)
The aim of this thesis is to improve risk measurement estimation by incorporating extra information, in the form of constraints, into completely non-parametric smoothing techniques. A similar approach has been applied in empirical likelihood analysis. The method of constraints incorporates bootstrap resampling techniques, in particular the biased bootstrap. The thesis brings together formal estimation methods, the use of empirical information, and computationally intensive methods. Here the constraint approach is applied to non-parametric smoothing estimators to improve the estimation or modelling of risk measures. We consider estimation of Value-at-Risk, of intraday volatility for market risk, and of recovery rate densities for credit risk management. Firstly, we study Value-at-Risk (VaR) and Expected Shortfall (ES) estimation. VaR and ES estimation are strongly related to quantile estimation, and hence tail estimation is of interest in its own right. We employ constrained and unconstrained kernel density estimators to estimate tail distributions, and we estimate quantiles from the fitted tail distribution. The constrained kernel density estimator is an application of the biased bootstrap technique proposed by Hall & Presnell (1998). The quantile estimator used with the constrained kernel estimator is the Harrell-Davis (H-D) quantile estimator. We calibrate the performance of the constrained and unconstrained kernel density estimators by estimating tail densities from samples drawn from Normal and Student-t distributions. We find a significant improvement in fitting heavy-tailed distributions using the constrained kernel estimator in conjunction with the H-D quantile estimator. We also present an empirical study demonstrating VaR and ES calculation. A credit event in financial markets is defined as the event that a party fails to pay an obligation to another, and credit risk is defined as the measure of uncertainty of such events. The recovery rate, in the credit risk context, is the rate of recuperation when a credit event occurs. It is defined as Recovery rate = 1 - LGD, where LGD is the rate of loss given default. From this point of view, the recovery rate is a key element both for credit risk management and for pricing credit derivatives; only credit risk management is considered in this thesis. To avoid the strong assumptions about the form of the recovery rate density made in current approaches, we propose a non-parametric technique incorporating a mode constraint, with an adjusted Beta kernel employed to estimate the recovery rate density function. Encouraging results for the constrained Beta kernel estimator are illustrated by a large number of simulations, since genuine data are highly confidential and difficult to obtain. Modelling high frequency data is a popular topic in contemporary finance, and the intraday volatility patterns of standard indices and market-traded assets have been well documented in the literature, which shows that these patterns reflect the different characteristics of different stock markets, such as the double U-shaped volatility pattern reported for the Hang Seng Index (HSI). We aim to capture this intraday volatility pattern using a non-parametric regression model. In particular, we propose a constrained function approximation technique to formally test the structure of the pattern and to approximate the location of the anti-mode of the U-shape. We illustrate this methodology on the HSI as an empirical example.
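As a point of reference (these are the plain, unconstrained building blocks named above, not the constrained biased-bootstrap estimators developed in the thesis), a sketch of the Harrell-Davis quantile estimator and a simple empirical VaR/ES calculation on hypothetical return data:

    import numpy as np
    from scipy import stats

    def harrell_davis_quantile(x, q):
        """Harrell-Davis estimate: Beta-weighted average of the order statistics."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        a, b = (n + 1) * q, (n + 1) * (1 - q)
        cdf = stats.beta.cdf(np.arange(n + 1) / n, a, b)
        return np.dot(np.diff(cdf), x)

    # hypothetical heavy-tailed daily returns
    returns = stats.t.rvs(df=4, scale=0.01, size=1000, random_state=0)

    # 99% VaR is (minus) the 1% quantile of the return distribution;
    # ES is the average loss beyond the VaR level
    var_99 = -harrell_davis_quantile(returns, 0.01)
    es_99 = -returns[returns <= -var_99].mean()
    print(f"VaR(99%) = {var_99:.4f}, ES(99%) = {es_99:.4f}")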
446

Electroacoustic Music With Moving Images: A Practice-Led Research Project

John Coulter Unknown Date (has links)
The folio of compositions and critical commentary documents a major practice-led research project that was carried out from 2003-09 on the topic of ‘electroacoustic music with moving images’. The written report analyses and expands on the creative works by supplying detailed information concerning the ‘process’ of composing for the genre, and the ‘language’ of audiovisual media pairing. Sixteen extracts of creative work featuring specific qualities of language are also provided as a means of focussing discussion points. The folio of compositions is comprised of four creative works: Shifting Ground (2005), Mouth Piece (2008), Abide With Me (2009), and Eyepiece (2009), which present a one-hour audiovisual programme. The series was premiered in a special concert Seeing With Ears: Video Works By John Coulter as part of the proceedings of the New Zealand Electroacoustic Music Symposium (NZEMS) 2-4 September 2009, School of Music, University of Auckland, New Zealand. Part 1 of the thesis seeks to illuminate a general process of creative practice that is relevant to all forms of studio-based composition. Three frameworks are examined: those that contain singular creative tasks, those that contain multiple tasks, and those that contain multiple creative projects. A 3-tiered model of reflective practice is then offered, and procedures common to all electroacoustic composers are discussed. The action research paradigm is then presented, followed by domain-specific guidelines for undertaking research. Key differences between ‘composing’ and ‘researching’ are examined, and principles of conducting practice and research simultaneously are submitted. For those working in studio-based settings, the study provides a model, and a vocabulary for discussing his/her creative process, as well as procedural guidelines for contributing to expert domain knowledge through practice-led research. Part 2 of the thesis directly addresses a common paradox faced by composers working with sounds and moving images. On one hand, audiovisual materials appear to offer the possibility of complementing one another - of forming a highly effective means of communicating artistic ideas, and on the other, they appear to carry the risk of detracting from one another – of deforming the musical language that he/she has worked so hard to create. The study seeks to transcend this paradox through the identification of audiovisual materials that function in different ways. Examples of creative work are offered to illustrate more general points of language, a model for classifying media pairs is put forward, and practical methods of audiovisual composition are proposed. The narrow findings of the study offer a vocabulary for discussing the functionality of audiovisual materials, detailed methods of media pairing and techniques of parametric alignment, while the wider findings extend to associated domains such as live electronic music, and hyper-instrument design. In summary, the study recognises both creative works and written works as knowledge-bearing documents. Succinctly stated, the essential research findings are presented and supported by both phenomenological and nominal means - through aspects of creative works that make themselves apparent during the listening process, and through retrospective logical enquiry.
447

Stepping stones towards linear optical quantum computing

Till Weinhold Unknown Date (has links)
The experiments described in this thesis form an investigation of the path towards establishing the requirements of quantum computing in a linear optical system. Our qubits are polarisation-encoded photons, for which the basic operations of quantum computing, single-qubit rotations, are a well understood problem. The difficulty lies in making photons interact, and to achieve this we use measurement-induced non-linearities. The first experiment in this thesis describes the thorough characterisation of a controlled-sign gate based on such non-linearities. The photons are provided as pairs generated through parametric down-conversion, and as such share correlations unlikely to carry over into large-scale implementations of the future. En route to such larger circuits, the action of the controlled-sign gate is characterised when the input qubits have been generated independently of each other, revealing a large drop in process fidelity. To explore the cause of this degradation of gate performance, a thorough and highly accurate model of the gate is derived, including a realistic description of faulty circuitry, photon loss and multi-photon emission by the source. By simulating the effects of the various noise sources individually, the heretofore largely ignored multi-photon emission is identified as the prime cause of the degraded gate performance, causing a drop in fidelity nearly three times as large as any other error source. I further draw the first comparison between the performance of an experimental gate and the error probabilities per gate derived as thresholds for fault-tolerant quantum computing. In the absence of a single rigorous threshold value, I compare the gate performance to the models that have yielded the highest threshold to date, as an upper bound, and to the threshold of the Gremlin model, which allows for the most general errors. Unsurprisingly, this comparison reveals that the implemented gate is clearly insufficient; however, remedying the multi-photon emission error alone would move this architecture to within striking distance of the boundary for fault-tolerant quantum computing. The methodology used can be applied to any gate in any architecture and, combined with a suitable model of the noise sources, can become an important guide for the developments required to achieve fault-tolerant quantum computing. The final experiment on the path towards linear optical quantum computing is the demonstration of a pair of basic versions of Shor's algorithm which display the entanglement essential to the algorithm. The results again highlight the need for extensive measurements to reveal the fundamental quality of the implemented algorithm, which is not accessible with limited indicative measurements. In the second part of the thesis, I describe two experiments on other forms of entanglement, extending the action of a Fock-state filter, a filter that attenuates single-photon states more strongly than multi-photon states, to produce entangled states. Furthermore, this device can be used in conjunction with standard wave plates to extend the range of operations possible on the bi-photonic qutrit space, showing that this setup suffices to produce any desired qutrit state, thereby giving access to new measurement capabilities and, in the process, creating and proving the first entanglement between a qubit and a qutrit.
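Purely as an illustrative aside (not the gate implementation or the noise model of the thesis), the ideal controlled-sign (CZ) operation on two polarisation qubits and a simple state-fidelity calculation against a white-noise-degraded output can be sketched as follows, with the noise weight p chosen arbitrarily:

    import numpy as np

    CZ = np.diag([1, 1, 1, -1]).astype(complex)   # |HH>, |HV>, |VH>, |VV> basis
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    psi_in = np.kron(plus, plus)                  # separable |+>|+> input
    psi_out = CZ @ psi_in                         # maximally entangling output state

    rho_ideal = np.outer(psi_out, psi_out.conj())
    p = 0.2                                       # hypothetical noise weight
    rho_noisy = (1 - p) * rho_ideal + p * np.eye(4) / 4   # white-noise admixture

    # fidelity of a pure target state with a mixed state: F = <psi|rho|psi>
    fidelity = np.real(psi_out.conj() @ rho_noisy @ psi_out)
    print(f"state fidelity = {fidelity:.3f}")     # (1 - p) + p/4 = 0.85

The thesis's analysis goes well beyond such a single-state check: it models the real circuitry, photon loss and multi-photon emission, and characterises the full process rather than one output state.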
450

Nonparametric Markov Random Field Models for Natural Texture Images

Paget, Rupert Unknown Date (has links)
The underlying aim of this research is to investigate mathematical descriptions of homogeneous textures in digital images for the purpose of segmentation and recognition. The research covers the problem of testing these mathematical descriptions by using them to generate synthetic realisations of the homogeneous texture for subjective and analytical comparison with the source texture from which they were derived. The application of this research is in analysing satellite or airborne images of the Earth's surface. In particular, Synthetic Aperture Radar (SAR) images often exhibit regions of homogeneous texture which, if segmented, could facilitate terrain classification. In this thesis we present noncausal, nonparametric, multiscale Markov random field (MRF) models for recognising and synthesising texture. The models have the ability to capture the characteristics of, and to synthesise, a wide variety of textures, varying from the highly structured to the stochastic. For texture synthesis, we introduce our own novel multiscale approach incorporating a new concept of local annealing. This allows us to use large neighbourhood systems to model complex natural textures with high-order statistical characteristics. The new multiscale texture synthesis algorithm also produces synthetic textures with few, if any, phase discontinuities. The power of our modelling technique is evident in that only a small source image is required to synthesise representative examples of the source texture, even when the texture contains long-range characteristics. We also show how the high-dimensional model of the texture may be modelled with lower dimensional statistics without compromising the integrity of the representation. We then show how these models -- which are able to capture most of the unique characteristics of a texture -- can be used for the "open-ended" problem of recognising textures embedded in a scene containing previously unseen textures. Whilst this technique was developed for the practical application of recognising different terrain types from SAR images, it has applications in other image processing tasks requiring texture recognition.
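As a highly simplified illustration of nonparametric, neighbourhood-based texture sampling (a toy in the same spirit only; it uses a causal neighbourhood and omits the thesis's noncausal MRF formulation, multiscale scheme and local annealing entirely), each output pixel below is drawn from source locations whose neighbourhoods best match the partially synthesised output:

    import numpy as np

    def synthesise(src, out_shape, half=2, k=5, seed=0):
        """Toy nonparametric synthesis from a small greyscale source texture."""
        rng = np.random.default_rng(seed)
        sh, sw = src.shape
        oh, ow = out_shape
        # initialise by tiling the source; borders keep this initialisation
        out = np.tile(src, (oh // sh + 1, ow // sw + 1))[:oh, :ow].astype(float)
        # causal neighbourhood: rows above plus pixels to the left on the same row
        offs = [(dy, dx) for dy in range(-half, 1) for dx in range(-half, half + 1)
                if (dy, dx) < (0, 0)]
        # library of source neighbourhood vectors (margins keep offsets in bounds)
        cands = [(y, x) for y in range(half, sh) for x in range(half, sw - half)]
        cand_vecs = np.array([[src[y + dy, x + dx] for dy, dx in offs] for y, x in cands])
        for y in range(half, oh):
            for x in range(half, ow - half):
                vec = np.array([out[y + dy, x + dx] for dy, dx in offs])
                d = np.sum((cand_vecs - vec) ** 2, axis=1)
                best = np.argsort(d)[:k]              # k closest source neighbourhoods
                cy, cx = cands[rng.choice(best)]      # sample one of them
                out[y, x] = src[cy, cx]
        return out

    # usage on a hypothetical binary "texture"
    src = (np.random.default_rng(1).random((32, 32)) > 0.5).astype(float)
    tex = synthesise(src, (48, 48))
    print(src.shape, "->", tex.shape)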
