131

Contrast sensitivity and glare: new measurement techniques and the visual consequences of wearing head-mounted displays

Longley, Christopher I. January 2016 (has links)
The main aim of this thesis was to evaluate the performance of the contrast sensitivity clock (CSC), a new screening device for measuring contrast sensitivity (CS) and glare. The device allows CS scores to be recorded without glare and with glare, together with a disability glare score. After initial data collection the design of the CSC was slightly amended, improving the performance of the device. The amended design was shown to be a valid, discriminative and repeatable measure, fit for purpose. The CSC is also quick to administer and relatively cheap to produce. Taken together, these factors give it the potential to become the test of choice for the assessment of visual glare. A head-mounted display system was also evaluated in terms of the glare effects it may cause. The monocular display screen of the device significantly reduced the CS of the eye directly exposed, but it also affected binocular performance by reducing the amount of binocular summation. Electronic devices, including head-mounted displays and satellite navigation systems, can seriously affect CS at the low luminance levels typical of night-time driving.
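As a rough illustration of how the binocular summation described above can be quantified (the threshold and sensitivity values below are made up, not taken from the thesis), contrast sensitivity is the reciprocal of the contrast threshold, and the summation ratio compares binocular sensitivity with the better monocular sensitivity:

```python
import math

def log_cs(contrast_threshold: float) -> float:
    """Log contrast sensitivity: CS is the reciprocal of the contrast threshold."""
    return math.log10(1.0 / contrast_threshold)

def binocular_summation_ratio(cs_left: float, cs_right: float, cs_binocular: float) -> float:
    """Ratio of binocular CS to the better monocular CS; values above 1 indicate summation."""
    return cs_binocular / max(cs_left, cs_right)

# Illustrative (made-up) sensitivities: a display in front of one eye lowers that
# eye's CS and, in turn, the amount of binocular summation.
baseline = binocular_summation_ratio(cs_left=80.0, cs_right=85.0, cs_binocular=120.0)
with_hmd = binocular_summation_ratio(cs_left=80.0, cs_right=45.0, cs_binocular=95.0)
print(f"summation ratio without HMD: {baseline:.2f}, with HMD: {with_hmd:.2f}")
```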
132

An ABAQUS Implementation of the Cell-based Smoothed Finite Element Method Using Quadrilateral Elements

Wang, Sili January 2014 (has links)
No description available.
133

Sparse Processing Methodologies Based on Compressive Sensing for Directions of Arrival Estimation

Hannan, Mohammad Abdul 29 October 2020 (has links)
In this dissertation, sparse processing of signals for direction-of-arrival (DoA) estimation is addressed in the framework of Compressive Sensing (CS). In particular, the DoA estimation problem for different types of sources, systems, and applications is formulated in the CS paradigm. The fundamental conditions of "sparsity" and "linearity" are carefully exploited in order to apply the CS-based methodologies with confidence. Innovative strategies for various systems and applications are developed, validated numerically, and analyzed extensively under different scenarios, including signal-to-noise ratio (SNR), mutual coupling, and polarization loss. More realistic data from electromagnetic (EM) simulators are also considered in several analyses to validate the potential of the proposed approaches. The performance of the proposed estimators is analyzed in terms of the standard root-mean-square error (RMSE) with respect to the different degrees of freedom (DoFs) of the DoA estimation problem, including the number of elements, the number of signals, and the signal properties. The outcomes reported in this thesis suggest that the proposed estimators are computationally efficient (i.e., appropriate for real-time estimation), robust (i.e., appropriate for heterogeneous scenarios), and versatile (i.e., easily adaptable to different systems).
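The abstract does not give the dissertation's specific formulations; the following is only a minimal sketch of one common CS-style approach to on-grid DoA estimation — a uniform linear array, an overcomplete steering dictionary, and greedy sparse recovery via orthogonal matching pursuit. Array size, grid spacing, source angles, and noise level are assumed values:

```python
import numpy as np

def steering_matrix(n_elements: int, grid_deg: np.ndarray, spacing: float = 0.5) -> np.ndarray:
    """Steering vectors of a uniform linear array (half-wavelength spacing) over an angular grid."""
    k = 2 * np.pi * spacing
    angles = np.deg2rad(grid_deg)
    elements = np.arange(n_elements)[:, None]
    return np.exp(1j * k * elements * np.sin(angles[None, :]))

def omp_doa(y: np.ndarray, A: np.ndarray, n_sources: int) -> list[int]:
    """Greedy sparse recovery (OMP): pick the grid indices that best explain the snapshot y."""
    residual, support = y.copy(), []
    for _ in range(n_sources):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    return sorted(support)

rng = np.random.default_rng(0)
grid = np.arange(-90.0, 90.5, 0.5)                    # angular grid (degrees), assumed
A = steering_matrix(n_elements=16, grid_deg=grid)
true_doas = np.array([-20.0, 35.0])                   # hypothetical source directions
x = np.zeros(grid.size, dtype=complex)
x[np.searchsorted(grid, true_doas)] = rng.normal(size=2) + 1j * rng.normal(size=2)
y = A @ x + 0.05 * (rng.normal(size=16) + 1j * rng.normal(size=16))  # single noisy snapshot
est = grid[omp_doa(y, A, n_sources=2)]
print("estimated DoAs (deg):", est)
```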
134

Context-aware and secure workflow systems

Alotaibi, Hind January 2012 (has links)
Businesses evolve. Their evolution necessitates the re-engineering of their existing "business processes", with the objectives of reducing costs, delivering services on time, and enhancing profitability in a competitive market. This is true in general, and particularly in domains such as manufacturing, pharmaceuticals and education. The central objective of workflow technologies is to separate business policies (normally encoded in business logic) from the underlying business applications. Such a separation is desirable as it improves the evolution of business processes and, more often than not, facilitates re-engineering at the organisation level without the need for detailed knowledge or analysis of the applications themselves. Workflow systems are currently used by many organisations with a wide range of interests and specialisations, in domains including, but not limited to, office automation, finance and banking, health care, art, telecommunications, manufacturing and education. We take the view that a workflow is a set of "activities", each of which performs a piece of functionality within a given "context" and may be constrained by some security requirements. These activities are coordinated to collectively achieve a required business objective. The specification of such coordination is presented as a set of "execution constraints", which include parallelisation (concurrency/distribution), serialisation, restriction, alternation, compensation and so on. Activities within workflows may be carried out by humans, various software-based application programs, or processing entities, according to organisational rules such as meeting deadlines or improving performance. Workflow execution can involve a large number of participants, services and devices which may cross the boundaries of several organisations and access a variety of data. This raises the importance of (i) context variations and context-awareness and (ii) security (e.g. access control and privacy). The specification of precise rules, which prevent unauthorised participants from executing sensitive tasks and prevent tasks from accessing unauthorised services or (commercially) sensitive information, is crucially important. For example, medical scenarios will require that only authorised doctors are permitted to perform certain tasks, that a patient's medical records cannot be accessed by anyone without the patient's consent, and that only specific machines are used to perform given tasks at given times. If a workflow execution cannot guarantee these requirements, then the flow will be rejected. Furthermore, security requirements are often temporal- and/or event-related. However, most existing models are static in nature; for example, it is hard, if not impossible, to express security requirements which are time-dependent (e.g. a customer is allowed to be overdrawn by 100 pounds only up to the first week of every month) or event-dependent (e.g. a bank account can only be manipulated by its owner unless there is a change in the law, or until six months after his/her death). Currently, there is no commonly accepted model for secure and context-aware workflows, or even a common agreement on which features a workflow security model should support. We have developed a novel approach to design, analyse and validate workflows, with the following components. (1) A modelling/design language, known as CS-Flow, with the following features: it supports concurrency; context and context-awareness are first-class citizens; it supports mobility, as activities can move from one context to another; it can express timing constraints such as delay, deadlines, priority and schedulability; it allows security policies (e.g. access control and privacy) to be expressed without extra linguistic complexity; and it enjoys a sound formal semantics that allows us to animate designs and compare alternative designs. (2) An approach known as the communication-closed layer, which allows us to serialise a highly distributed workflow into a semantically equivalent quasi-sequential flow that is easier to understand and analyse. Such restructuring gives us a mechanism for designing fault-tolerant workflows, as layers are atomic activities and various existing forward and backward error-recovery techniques can be deployed. (3) A reduction semantics for CS-Flow that allows us to build tool support for animating specifications and designs. The approach has been evaluated on a health-care scenario, namely the Context Aware Ward (CAW) system; health care provides huge numbers of business workflows, which will benefit from workflow adaptation and support through pervasive computing systems. The evaluation takes two complementary strands: providing CS-Flow models and specifications, and formally verifying the time-critical components of a workflow.
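CS-Flow itself is a formal language that is not reproduced in the abstract; the sketch below is only a hypothetical illustration, in plain Python, of the kind of context- and time-dependent security rule described above (an activity that may be executed only by an authorised role, on designated machines, within permitted hours). All names and parameters are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class Context:
    """Execution context of a workflow activity: who runs it, where, and when."""
    user: str
    roles: frozenset[str]
    machine: str
    time: datetime

@dataclass
class Activity:
    """A workflow activity guarded by simple, illustrative security constraints."""
    name: str
    required_role: str
    allowed_machines: frozenset[str]
    allowed_hours: range = field(default=range(0, 24))

    def may_execute(self, ctx: Context) -> bool:
        return (self.required_role in ctx.roles
                and ctx.machine in self.allowed_machines
                and ctx.time.hour in self.allowed_hours)

# Hypothetical ward scenario: a prescription task restricted to doctors,
# on ward terminals, during the day shift.
prescribe = Activity("prescribe_medication", required_role="doctor",
                     allowed_machines=frozenset({"ward-terminal-1", "ward-terminal-2"}),
                     allowed_hours=range(8, 20))
ctx = Context(user="alice", roles=frozenset({"doctor"}),
              machine="ward-terminal-1", time=datetime(2012, 5, 1, 9, 30))
print(prescribe.may_execute(ctx))  # True; a workflow engine would reject the flow otherwise
```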
135

Single photon generation and quantum computing with integrated photonics

Spring, Justin Benjamin January 2014 (has links)
Photonics has consistently played an important role in the investigation of quantum-enhanced technologies and the corresponding study of fundamental quantum phenomena. The majority of these experiments have relied on the free space propagation of light between bulk optical components. This relatively simple and flexible approach often provides the fastest route to small proof-of-principle demonstrations. Unfortunately, such experiments occupy significant space, are not inherently phase stable, and can exhibit significant scattering loss which severely limits their use. Integrated photonics offers a scalable route to building larger quantum states of light by surmounting these barriers. In the first half of this thesis, we describe the operation of on-chip heralded sources of single photons. Loss plays a critical role in determining whether many quantum technologies have any hope of outperforming their classical analogues. Minimizing loss leads us to choose Spontaneous Four-Wave Mixing (SFWM) in a silica waveguide for our source design; silica exhibits extremely low scattering loss and emission can be efficiently coupled to the silica chips and fibers that are widely used in quantum optics experiments. We show there is a straightforward route to maximizing heralded photon purity by minimizing the spectral correlations between emitted photon pairs. Fabrication of identical sources on a large scale is demonstrated by a series of high-visibility interference experiments. This architecture offers a promising route to the construction of nonclassical states of higher photon number by operating many on-chip SFWM sources in parallel. In the second half, we detail one of the first proof-of-principle demonstrations of a new intermediate model of quantum computation called boson sampling. While likely less powerful than a universal quantum computer, boson sampling machines appear significantly easier to build and may allow the first convincing demonstration of a quantum-enhanced computation in the not-too-distant future. Boson sampling requires large interferometric networks, which are challenging to build with bulk optics; we therefore perform our experiment on-chip. We model the effect of loss on our postselected experiment and implement a circuit characterization technique that accounts for this loss. Experimental imperfections, including higher-order emission from our photon pair sources and photon distinguishability, are modeled and found to explain the sampling error observed in our experiment.
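For collision-free events with single photons injected into distinct input modes, the boson sampling output probability is given by the squared modulus of the permanent of a submatrix of the interferometer unitary. The sketch below is not the authors' circuit — the modes, photon number and unitary are arbitrary — and simply computes one such probability with Ryser's formula:

```python
import itertools
import numpy as np

def permanent(M: np.ndarray) -> complex:
    """Ryser's formula for the matrix permanent (fine for the small matrices used here)."""
    n = M.shape[0]
    total = 0.0 + 0.0j
    for subset in itertools.chain.from_iterable(
            itertools.combinations(range(n), r) for r in range(1, n + 1)):
        total += (-1) ** len(subset) * np.prod(M[:, list(subset)].sum(axis=1))
    return (-1) ** n * total

def output_probability(U: np.ndarray, inputs: list[int], outputs: list[int]) -> float:
    """Probability of one photon in each listed output mode, for collision-free events."""
    sub = U[np.ix_(outputs, inputs)]
    return abs(permanent(sub)) ** 2

# Random unitary via QR of a complex Gaussian matrix (6 modes, 3 photons), illustrative only.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(A)
p = output_probability(U, inputs=[0, 1, 2], outputs=[1, 3, 5])
print(f"P(photons in modes 1,3,5) = {p:.4f}")
```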
136

The mathematical structure of non-locality and contextuality

Mansfield, Shane January 2013 (has links)
Non-locality and contextuality are key features of quantum mechanics that distinguish it from classical physics. We aim to develop a deeper, more structural understanding of these phenomena, underpinned by robust and elegant mathematical theory with a view to providing clarity and new perspectives on conceptual and foundational issues. A general framework for logical non-locality is introduced and used to prove that 'Hardy's paradox' is complete for logical non-locality in all (2,2,l) and (2,k,2) Bell scenarios, a consequence of which is that Bell states are the only entangled two-qubit states that are not logically non-local, and that Hardy non-locality can be witnessed with certainty in a tripartite quantum system. A number of developments of the unified sheaf-theoretic approach to non-locality and contextuality are considered, including the first application of cohomology as a tool for studying the phenomena: we find cohomological witnesses corresponding to many of the classic no-go results, and completely characterise contextuality for large families of Kochen-Specker-like models. A connection with the problem of the existence of perfect matchings in k-uniform hypergraphs is explored, leading to new results on the complexity of deciding contextuality. A refinement of the sheaf-theoretic approach is found that captures partial approximations to locality/non-contextuality and can allow Bell models to be constructed from models of more general kinds which are equivalent in terms of non-locality/contextuality. Progress is made on bringing recent results on the nature of the wavefunction within the scope of the logical and sheaf-theoretic methods. Computational tools are developed for quantifying contextuality and finding generalised Bell inequalities for any measurement scenario which complement the research programme. This also leads to a proof that local ontological models with 'negative probabilities' generate the no-signalling polytopes for all Bell scenarios.
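One way to cast the kind of computational question mentioned above — deciding whether an empirical model is local/non-contextual — is as a linear program: a model is non-contextual precisely when some global probability distribution on joint outcome assignments marginalises to the observed distributions in each context. The sketch below is a generic illustration (not the thesis's tools) for a (2,2,2) Bell scenario, tested on the well-known PR-box model, assuming scipy is available:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def is_local(model: dict[tuple[int, int], np.ndarray]) -> bool:
    """Check whether a (2,2,2) empirical model admits a global (local hidden-variable)
    distribution; model[(a_setting, b_setting)] is a 2x2 array of outcome probabilities."""
    globals_ = list(itertools.product([0, 1], repeat=4))   # outcomes of (A0, A1, B0, B1)
    rows, rhs = [], []
    for (i, j), table in model.items():
        for oa, ob in itertools.product([0, 1], repeat=2):
            rows.append([1.0 if g[i] == oa and g[2 + j] == ob else 0.0 for g in globals_])
            rhs.append(table[oa, ob])
    rows.append([1.0] * len(globals_))                      # probabilities sum to one
    rhs.append(1.0)
    res = linprog(c=np.zeros(len(globals_)), A_eq=np.array(rows), b_eq=np.array(rhs),
                  bounds=[(0, 1)] * len(globals_), method="highs")
    return res.success

# PR-box correlations: perfectly correlated except in the (1,1) context -- maximally non-local.
pr = {(i, j): np.array([[0.5, 0.0], [0.0, 0.5]]) if (i, j) != (1, 1)
      else np.array([[0.0, 0.5], [0.5, 0.0]]) for i in (0, 1) for j in (0, 1)}
print("PR box local?", is_local(pr))   # expected: False
```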
137

Analog and Digital Approaches to UWB Narrowband Interference Cancellation

Omid, Abedi 02 October 2012 (has links)
Ultra-wideband (UWB) is an extremely promising wireless technology for researchers and industry. Among its most interesting features are its high data rate and its robustness to frequency-selective fading. However, beside such advantages, UWB system performance is strongly affected by narrowband interference (NBI), undesired UWB signals and tone/multi-tone noise. For this reason, NBI cancellation remains a research challenge, with the aim of improving system performance against receiver complexity, power consumption, linearity, etc. In this work, the two major receiver sections, i.e., the analog (radio-frequency, RF) and digital (digital signal processing, DSP) sections, were considered, and new techniques were proposed to reduce circuit complexity and power consumption while improving signal parameters. In the RF section, key design parameters of multiband UWB low-noise amplifiers were investigated, such as circuit configuration, input matching and desired/undesired frequency-band filtering, highlighting the most suitable filtering approach for efficient UWB NBI cancellation. In the DSP section, owing to the pulsed nature of the transmitted signals, issues such as modulation type and level, pulse variety and shape, and coloured-noise/tone-noise assumptions were addressed for efficient NBI cancellation. A comparison was performed in terms of bit error rate, signal-to-interference ratio, signal-to-noise ratio, and channel capacity to highlight the most suitable parameters for efficient DSP design. The optimum number of filters was also investigated, which allows the filter bandwidth to be reduced while keeping the required low sampling rate, thus improving the system bit error rate.
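As a purely digital-domain illustration (the pulse shape, repetition rate, sampling rate and interferer frequency below are assumptions, not parameters from the thesis), a strong narrowband tone riding on a UWB-like pulse train can be suppressed with a second-order IIR notch filter:

```python
import numpy as np
from scipy import signal

fs = 10e9                        # assumed sampling rate (10 GS/s)
t = np.arange(0, 2e-6, 1 / fs)   # 2 microseconds of signal

# UWB-like train of Gaussian monocycle pulses (illustrative pulse shape and rate).
pulse_times = np.arange(0, 2e-6, 100e-9)
tau = 0.5e-9
uwb = sum(-(t - tc) / tau * np.exp(-((t - tc) ** 2) / (2 * tau ** 2)) for tc in pulse_times)

# Narrowband interferer: an assumed 2.4 GHz tone much stronger than the UWB signal.
nbi = 5.0 * np.sin(2 * np.pi * 2.4e9 * t)
received = uwb + nbi

# Second-order IIR notch centred on the interferer frequency.
b, a = signal.iirnotch(w0=2.4e9, Q=30, fs=fs)
cleaned = signal.filtfilt(b, a, received)

for name, x in [("received", received), ("after notch", cleaned)]:
    print(f"{name}: residual-to-signal power ratio ~ {np.var(x - uwb) / np.var(uwb):.2f}")
```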
138

Aqueous and solid phase interactions of radionuclides with organic complexing agents

Reinoso-Maset, Estela January 2010 (has links)
Characterising the geochemistry and speciation of major contaminant radionuclides is crucial in order to understand their behaviour and migration in complex environmental systems. Organic complexing agents used in nuclear decontamination have been found to enhance the migration of radionuclides at contaminated sites; however, the mechanisms of the interactions in complex environments are poorly understood. In this work, radionuclide speciation and sorption behaviour were investigated in order to identify interactions between four key radionuclides with different oxidation states (Cs(I) and Sr(II) as important fission products; Th(IV) and U(VI) as representative actinides), three anthropogenic organic complexing agents with different denticities (EDTA, NTA and picolinic acid, as common co-contaminants), and natural sand (as a simple environmental solid phase). A UV spectrophotometric method and an IC method were developed to monitor the behaviour of EDTA, NTA and picolinic acid in the subsequent experiments. The optimised methods were simple, used widely available instrumentation and achieved the analytical figures of merit necessary for compound-specific determination over variable background levels of DOC and in the presence of natural cations, anions and radionuclides. The effect of the ligands on the solubility of the radionuclides was studied using a natural sand matrix and pure silica for comparison of anions, cations and organic carbon. In the silica system, the presence of EDTA, NTA and, to a lesser extent, picolinic acid showed a clear net effect of increasing Th and U solubility. Conversely, in the sand system, the sorption of Th and U was kinetically controlled and radionuclide complexation by the ligands enhanced the rate of sorption, by a mechanism identified as metal exchange with matrix metals. Experiments in which excess EDTA, NTA and picolinic acid (40–100-fold excess) were pre-equilibrated with Th and U prior to contact with the sand, to allow a greater degree of radionuclide complex formation, resulted in enhanced rates of sorption. This confirmed that the radionuclide complexes interacted with the sand surface more readily than uncomplexed Th or U. Overall, this shows that Th and U mobility would be lowered in this natural sand by the presence of organic co-contaminants. In contrast, the complexation of Sr with the complexing agents was rapid, and the effect of the ligands was observed as a net increase in Sr solubility (EDTA, picolinic acid) or sorption (NTA). As expected, Cs did not interact with the ligands and showed rapid sorption kinetics. Finally, ESI-MS was used to study competitive interactions in the aqueous Th-Mn-ligand ternary system. Quantification presented a challenge; however, the careful approach taken to determine the signal correction allowed the competitive interactions between Mn and Th for EDTA to be studied semi-quantitatively. In an EDTA-limited system, Th displaced Mn from the EDTA complex, even in the presence of a higher Mn concentration, which was consistent with the higher stability constant of the Th-EDTA complex.
139

Pictures of processes : automated graph rewriting for monoidal categories and applications to quantum computing

Kissinger, Aleks January 2011 (has links)
This work is about diagrammatic languages, how they can be represented, and what they in turn can be used to represent. More specifically, it focuses on representations and applications of string diagrams. String diagrams are used to represent a collection of processes, depicted as "boxes" with multiple (typed) inputs and outputs, depicted as "wires". If we allow plugging input and output wires together, we can intuitively represent complex compositions of processes, formalised as morphisms in a monoidal category. While string diagrams are very intuitive, existing methods for defining them rigorously rely on topological notions that do not extend naturally to automated computation. The first major contribution of this dissertation is the introduction of a discretised version of a string diagram called a string graph. String graphs form a partial adhesive category, so they can be manipulated using double-pushout graph rewriting. Furthermore, we show how string graphs modulo a rewrite system can be used to construct free symmetric traced and compact closed categories on a monoidal signature. The second contribution is in the application of graphical languages to quantum information theory. We use a mixture of diagrammatic and algebraic techniques to prove a new classification result for strongly complementary observables. Namely, maximal sets of strongly complementary observables of dimension D must be of size no larger than 2, and are in 1-to-1 correspondence with the Abelian groups of order D. We also introduce a graphical language for multipartite entanglement and illustrate a simple graphical axiom that distinguishes the two maximally-entangled tripartite qubit states: GHZ and W. Notably, we illustrate how the algebraic structures induced by these operations correspond to the (partial) arithmetic operations of addition and multiplication on the complex projective line. The third contribution is a description of two software tools developed in part by the author to implement much of the theoretical content described here. The first tool is Quantomatic, a desktop application for building string graphs and graphical theories, as well as performing automated graph rewriting visually. The second is QuantoCoSy, which performs fully automated, model-driven theory creation using a procedure called conjecture synthesis.
140

Neue Methoden zur Entdeckung von Fehlspezifikation bei Latent-Trait-Modellen der Veränderungsmessung (New Methods for Detecting Misspecification in Latent-Trait Models for the Measurement of Change)

Klein, Stefan 09 May 2003 (has links)
The aim of this thesis is to develop methods for detecting misspecification in the Linear Logistic Test Model (LLTM) and related models for the measurement of change. Misspecification here means that the model has been based on an incorrect pattern of latent traits, which can lead to substantial estimation errors (cf. [Baker, 1993]). The methods presented make it possible, under easily satisfied assumptions, to quantify the extent to which the chosen model specification is incorrect without having to use the parameter values obtained from model estimation. First, a methodology based on the well-known Mantel-Haenszel test is introduced; for hypotheses concerning the change parameters of an LLTM, it can be regarded as a direct competitor to the familiar likelihood-ratio tests for the LLTM (e.g. [Fischer, 1995a]). The main topic of the thesis is a set of uniformly most powerful person-fit tests optimised for the LLTM, together with effect measures derived from them. These make it possible to identify subpopulations in which a deviation from the assumed model has occurred. The statistical properties of these tests and effect measures are examined by means of simulation and power calculations using the SAS software, and examples of the application of the methods are given.
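The thesis's specific MH-based construction for LLTM change parameters is not reproduced in the abstract; the sketch below only illustrates the generic Mantel-Haenszel chi-square over stratified 2×2 tables (as commonly used in differential item functioning analysis), with made-up counts:

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel(tables: np.ndarray) -> tuple[float, float]:
    """Mantel-Haenszel chi-square (with continuity correction) for K stratified 2x2 tables.
    `tables` has shape (K, 2, 2) with cells [[a, b], [c, d]] per stratum."""
    a = tables[:, 0, 0]
    n1 = tables[:, 0, :].sum(axis=1)      # row-1 totals per stratum
    n2 = tables[:, 1, :].sum(axis=1)      # row-2 totals per stratum
    m1 = tables[:, :, 0].sum(axis=1)      # column-1 totals per stratum
    n = tables.sum(axis=(1, 2))
    expected = n1 * m1 / n
    variance = n1 * n2 * m1 * (n - m1) / (n ** 2 * (n - 1))
    stat = (abs(a.sum() - expected.sum()) - 0.5) ** 2 / variance.sum()
    return stat, chi2.sf(stat, df=1)

# Hypothetical item-response counts (correct/incorrect) for two groups across 3 score strata.
tables = np.array([[[20, 10], [15, 15]],
                   [[30, 10], [25, 15]],
                   [[25,  5], [18, 12]]], dtype=float)
stat, p = mantel_haenszel(tables)
print(f"MH chi-square = {stat:.3f}, p = {p:.3f}")
```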
