131

Koncepční návrh dvoumístného lehkého monitorovacího letounu / Conceptual study of light two-seater surveillance aircraft

Kaňák, Ondřej January 2012 (has links)
This master’s thesis deals with the conceptual design of a light surveillance aircraft. It involves modifying the VUT 001 Marabu to meet new requirements, brought about mainly by a new definition of the typical flight mission. The design and geometry of the new aircraft’s surfaces are also addressed, as is the design and structural computation of a mount for an electro-optical monitoring device. Last but not least, the principal flight performance figures are calculated.
132

A Curriculum Development for 21st Century Learners: Using Project Based Learning to Teach the Four Cs Required for Today and Tomorrow's Workforce

Sheppard, Sarah 28 March 2022 (has links)
No description available.
133

Contrast sensitivity and glare: new measurement techniques and the visual consequences of wearing head-mounted displays

Longley, Christopher I. January 2016 (has links)
The main aim of this thesis was to evaluate the performance of the contrast sensitivity clock (CSC), a new screening device for measuring contrast sensitivity (CS) and glare. The device allows CS without glare, CS with glare and disability glare scores to be recorded. After initial data collection the design of the CSC was slightly amended, improving the performance of the device. The amended design was shown to be a valid, discriminative and repeatable measure, fit for purpose. The CSC is also quick to perform and relatively cheap to produce. Considering all these factors, it shows potential to become the test of choice for the assessment of visual glare. A head-mounted display system was also evaluated in terms of the glare effects it may cause. The monocular display screen of the device significantly reduced the CS of the eye directly exposed, but also affected binocular performance, reducing the amount of binocular summation. Electronic devices, including head-mounted displays and satellite navigation systems, can seriously affect CS at low luminance levels, similar to those found when driving at night.
134

An ABAQUS Implementation of the Cell-based Smoothed Finite Element Method Using Quadrilateral Elements

Wang, Sili January 2014 (has links)
No description available.
135

Sparse Processing Methodologies Based on Compressive Sensing for Directions of Arrival Estimation

Hannan, Mohammad Abdul 29 October 2020 (has links)
In this dissertation, sparse processing of signals for directions-of-arrival (DoAs) estimation is addressed in the framework of Compressive Sensing (CS). In particular, the DoA estimation problem for different types of sources, systems, and applications is formulated in the CS paradigm. The fundamental conditions of "sparsity" and "linearity" are carefully examined in order to apply the CS-based methodologies with confidence. Moreover, innovative strategies for various systems and applications are developed, validated numerically, and analysed extensively for different scenarios, including signal-to-noise ratio (SNR), mutual coupling, and polarization loss. More realistic data from electromagnetic (EM) simulators are often considered in these analyses to validate the potential of the proposed approaches. The performance of the proposed estimators is analysed in terms of standard root-mean-square error (RMSE) with respect to the different degrees-of-freedom (DoFs) of the DoA estimation problem, including the number of elements, the number of signals, and the signal properties. The outcomes reported in this thesis suggest that the proposed estimators are computationally efficient (i.e., appropriate for real-time estimation), robust (i.e., appropriate for heterogeneous scenarios), and versatile (i.e., easily adaptable to different systems).
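The abstract does not reproduce the formulation itself, so the following minimal Python sketch shows the standard CS formulation of on-grid DoA estimation that it alludes to, for a uniform linear array, solved with plain ISTA. The array geometry, angle grid, source directions and regularisation weight are illustrative assumptions, not the thesis's choices.

```python
# Sketch: DoA estimation as sparse recovery over a grid of candidate
# angles, y = A(theta) s + n, solved with ISTA (complex soft threshold).
import numpy as np

wavelength = 1.0
d = wavelength / 2                              # half-wavelength ULA spacing
n_elements = 16
grid = np.deg2rad(np.linspace(-90, 90, 181))    # candidate DoAs

# Steering matrix: one column per candidate direction
k = 2 * np.pi / wavelength
A = np.exp(1j * k * d * np.outer(np.arange(n_elements), np.sin(grid)))

# Two true sources at -20 and 35 degrees, plus noise
s_true = np.zeros(grid.size, dtype=complex)
s_true[np.argmin(np.abs(grid - np.deg2rad(-20)))] = 1.0
s_true[np.argmin(np.abs(grid - np.deg2rad(35)))] = 0.8
rng = np.random.default_rng(0)
y = A @ s_true + 0.05 * (rng.standard_normal(n_elements)
                         + 1j * rng.standard_normal(n_elements))

# ISTA for min (1/2)||y - As||_2^2 + lam ||s||_1
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant
s = np.zeros_like(s_true)
for _ in range(500):
    g = s + (A.conj().T @ (y - A @ s)) / L
    s = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam / L, 0)

peaks = np.rad2deg(grid[np.abs(s) > 0.5 * np.abs(s).max()])
print("estimated DoAs (deg):", peaks)
```

The sketch makes the two conditions named above concrete: the snapshot is linear in the grid coefficients, and only a few of those coefficients are non-zero.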
136

Context-aware and secure workflow systems

Alotaibi, Hind January 2012 (has links)
Businesses evolve, and their evolution necessitates the re-engineering of their existing "business processes", with the objectives of reducing costs, delivering services on time, and enhancing profitability in a competitive market. This is true in general, and particularly in domains such as manufacturing, pharmaceuticals and education. The central objective of workflow technologies is to separate business policies (normally encoded in business logic) from the underlying business applications. Such a separation is desirable as it improves the evolution of business processes and, more often than not, facilitates re-engineering at the organisational level without the need for detailed knowledge or analysis of the applications themselves. Workflow systems are currently used by many organisations with a wide range of interests and specialisations in many domains, including, but not limited to, office automation, finance and banking, health care, art, telecommunications, manufacturing and education.

We take the view that a workflow is a set of "activities", each of which performs a piece of functionality within a given "context" and may be constrained by security requirements. These activities are coordinated to collectively achieve a required business objective. The specification of such coordination is presented as a set of "execution constraints", which include parallelisation (concurrency/distribution), serialisation, restriction, alternation, compensation and so on. Activities within workflows may be carried out by humans, software-based application programs, or processing entities, according to organisational rules such as meeting deadlines or improving performance. Workflow execution can involve a large number of participants, services and devices, which may cross the boundaries of various organisations and access a variety of data. This raises the importance of:
- context variations and context-awareness; and
- security (e.g. access control and privacy).

The specification of precise rules that prevent unauthorised participants from executing sensitive tasks, and prevent tasks from accessing unauthorised services or (commercially) sensitive information, is crucially important. For example, medical scenarios require that:
- only authorised doctors are permitted to perform certain tasks;
- a patient's medical records are not accessed by anyone without the patient's consent; and
- only specific machines are used to perform given tasks at given times.

If a workflow execution cannot guarantee these requirements, the flow is rejected. Furthermore, security requirements are often temporal- and/or event-related, yet most existing models are static in nature; it is hard, if not impossible, to express security requirements that are:
- time-dependent (e.g. a customer is allowed to be overdrawn by 100 pounds only up to the first week of every month); or
- event-dependent (e.g. a bank account can only be manipulated by its owner unless there is a change in the law, or until six months after his/her death).

Currently, there is no commonly accepted model for secure and context-aware workflows, nor even a common agreement on which features a workflow security model should support. We have developed a novel approach to design, analyse and validate workflows, with the following components:
- A modelling/design language, known as CS-Flow, which: supports concurrency; treats context and context-awareness as first-class citizens; supports mobility, as activities can move from one context to another; can express timing constraints such as delay, deadlines, priority and schedulability; allows security policies (e.g. access control and privacy) to be expressed without extra linguistic complexity; and enjoys a sound formal semantics that allows us to animate designs and compare alternative designs.
- A technique known as communication-closed layers, which allows us to serialise a highly distributed workflow into a semantically equivalent quasi-sequential flow that is easier to understand and analyse. Such restructuring gives us a mechanism for designing fault-tolerant workflows, as layers are atomic activities to which various existing forward and backward error-recovery techniques can be applied.
- A reduction semantics for CS-Flow that allows us to build tool support for animating specifications and designs. This has been evaluated on a health-care scenario, namely the Context Aware Ward (CAW) system; health care provides huge numbers of business workflows that would benefit from workflow adaptation and support through pervasive computing systems. The evaluation takes two complementary strands: providing CS-Flow models and specifications, and formally verifying a time-critical component of the workflow.
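CS-Flow's concrete syntax is not given here, so the following Python sketch is purely illustrative of the kind of model the abstract describes: activities bound to contexts, with access-control and timing constraints checked before execution, echoing the medical-scenario rules above. Every identifier is invented for the example.

```python
# Hypothetical illustration of context-aware, security-constrained
# workflow activities; not CS-Flow itself.
from dataclasses import dataclass

@dataclass
class Context:
    location: str
    time_hour: int                      # 0-23, wall-clock hour

@dataclass
class Activity:
    name: str
    allowed_roles: set                  # access control
    allowed_locations: set              # context constraint
    deadline_hour: int                  # simple timing constraint

def may_execute(act: Activity, role: str, ctx: Context) -> bool:
    """Reject the flow unless every security/context rule holds."""
    return (role in act.allowed_roles
            and ctx.location in act.allowed_locations
            and ctx.time_hour <= act.deadline_hour)

read_records = Activity("read_patient_records",
                        allowed_roles={"doctor"},
                        allowed_locations={"ward_a"},
                        deadline_hour=18)

print(may_execute(read_records, "doctor", Context("ward_a", 10)))   # True
print(may_execute(read_records, "nurse", Context("ward_a", 10)))    # False
```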
137

Single photon generation and quantum computing with integrated photonics

Spring, Justin Benjamin January 2014 (has links)
Photonics has consistently played an important role in the investigation of quantum-enhanced technologies and the corresponding study of fundamental quantum phenomena. The majority of these experiments have relied on the free-space propagation of light between bulk optical components. This relatively simple and flexible approach often provides the fastest route to small proof-of-principle demonstrations. Unfortunately, such experiments occupy significant space, are not inherently phase stable, and can exhibit significant scattering loss, which severely limits their use. Integrated photonics offers a scalable route to building larger quantum states of light by surmounting these barriers. In the first half of this thesis, we describe the operation of on-chip heralded sources of single photons. Loss plays a critical role in determining whether many quantum technologies have any hope of outperforming their classical analogues. Minimizing loss leads us to choose Spontaneous Four-Wave Mixing (SFWM) in a silica waveguide for our source design; silica exhibits extremely low scattering loss, and emission can be efficiently coupled to the silica chips and fibers that are widely used in quantum optics experiments. We show there is a straightforward route to maximizing heralded photon purity by minimizing the spectral correlations between emitted photon pairs. Fabrication of identical sources on a large scale is demonstrated by a series of high-visibility interference experiments. This architecture offers a promising route to the construction of nonclassical states of higher photon number by operating many on-chip SFWM sources in parallel. In the second half, we detail one of the first proof-of-principle demonstrations of a new intermediate model of quantum computation called boson sampling. While likely less powerful than a universal quantum computer, boson sampling machines appear significantly easier to build and may allow the first convincing demonstration of a quantum-enhanced computation in the not-too-distant future. Boson sampling requires a large interferometric network, which is challenging to build with bulk optics; we therefore perform our experiment on-chip. We model the effect of loss on our postselected experiment and implement a circuit characterization technique that accounts for this loss. Experimental imperfections, including higher-order emission from our photon-pair sources and photon distinguishability, are modeled and found to explain the sampling error observed in our experiment.
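As standard background (not a result of this thesis): in the collision-free regime, the probability of detecting the n photons in output modes T, given input modes S and an m-mode interferometer with unitary U, is |Perm(U_{T,S})|², and the #P-hardness of the permanent underlies the conjectured power of boson sampling. A minimal Python sketch using Ryser's formula, with an illustrative Haar-random unitary:

```python
# Toy boson-sampling probability via the matrix permanent.
import numpy as np
from itertools import combinations
from scipy.stats import unitary_group

def permanent(a):
    """Ryser's formula, O(2^n n^2): fine for proof-of-principle sizes."""
    n = a.shape[0]
    total = 0.0j
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(a[:, cols].sum(axis=1))
    return (-1) ** n * total

m, n = 6, 3                              # 6-mode interferometer, 3 photons
U = unitary_group.rvs(m, random_state=1) # stand-in for the on-chip circuit
inputs, outputs = [0, 1, 2], [1, 3, 5]
p = abs(permanent(U[np.ix_(outputs, inputs)])) ** 2
print(f"P(photons in modes {outputs}) = {p:.4f}")
```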
138

The mathematical structure of non-locality and contextuality

Mansfield, Shane January 2013 (has links)
Non-locality and contextuality are key features of quantum mechanics that distinguish it from classical physics. We aim to develop a deeper, more structural understanding of these phenomena, underpinned by robust and elegant mathematical theory with a view to providing clarity and new perspectives on conceptual and foundational issues. A general framework for logical non-locality is introduced and used to prove that 'Hardy's paradox' is complete for logical non-locality in all (2,2,l) and (2,k,2) Bell scenarios, a consequence of which is that Bell states are the only entangled two-qubit states that are not logically non-local, and that Hardy non-locality can be witnessed with certainty in a tripartite quantum system. A number of developments of the unified sheaf-theoretic approach to non-locality and contextuality are considered, including the first application of cohomology as a tool for studying the phenomena: we find cohomological witnesses corresponding to many of the classic no-go results, and completely characterise contextuality for large families of Kochen-Specker-like models. A connection with the problem of the existence of perfect matchings in k-uniform hypergraphs is explored, leading to new results on the complexity of deciding contextuality. A refinement of the sheaf-theoretic approach is found that captures partial approximations to locality/non-contextuality and can allow Bell models to be constructed from models of more general kinds which are equivalent in terms of non-locality/contextuality. Progress is made on bringing recent results on the nature of the wavefunction within the scope of the logical and sheaf-theoretic methods. Computational tools are developed for quantifying contextuality and finding generalised Bell inequalities for any measurement scenario which complement the research programme. This also leads to a proof that local ontological models with 'negative probabilities' generate the no-signalling polytopes for all Bell scenarios.
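For readers unfamiliar with it, the logical structure of Hardy's paradox in a bipartite (2,2,2) scenario can be stated as follows (standard background, with notation chosen here for illustration): Alice measures a or a', Bob measures b or b', and the outcome probabilities satisfy

```latex
p(a{=}1,\; b{=}1) > 0, \qquad
p(a{=}1,\; b'{=}0) = 0, \qquad
p(a'{=}0,\; b{=}1) = 0, \qquad
p(a'{=}1,\; b'{=}1) = 0.
```

In any local hidden-variable model the three zero conditions force the first probability to vanish, so observing all four establishes non-locality without recourse to inequalities; the completeness result above means, roughly, that every logically non-local behaviour in those scenarios is witnessed by a Hardy-type argument.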
139

Analog and Digital Approaches to UWB Narrowband Interference Cancellation

Omid, Abedi 02 October 2012 (has links)
Ultra-wideband (UWB) is an extremely promising wireless technology for both researchers and industry. Among its most attractive features are its high data rate and its robustness to frequency-selective fading. Beside these advantages, however, UWB system performance is strongly affected by narrowband interference (NBI), undesired UWB signals and tone/multi-tone noise. For this reason, NBI cancellation remains an open research challenge, involving trade-offs between system performance and receiver complexity, power consumption, linearity, etc. In this work, the two major receiver sections, i.e., analog (radio-frequency, RF) and digital (digital signal processing, DSP), were considered, and new techniques were proposed to reduce circuit complexity and power consumption while improving signal parameters. In the RF section, the key design parameters of different multiband UWB low-noise amplifiers were investigated, such as circuit configuration, input matching and desired/undesired frequency-band filtering, highlighting the most suitable filtering package for efficient UWB NBI cancellation. In the DSP section, given pulsed transmitter signals, different issues such as modulation type and level, pulse variety and shape, and coloured-noise/tone-noise assumptions were addressed for efficient NBI cancellation. A comparison was performed in terms of bit-error rate, signal-to-interference ratio, signal-to-noise ratio, and channel capacity to highlight the most suitable parameters for an efficient DSP design. The optimum number of filters, which allows the filter bandwidth to be reduced in line with the required low sampling rate and thus improves the system bit-error rate, was also investigated.
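As an illustrative sketch of the DSP side only (assumed for this summary, not taken from the thesis), the following Python example removes a single CW interferer from a UWB-like pulse train with an IIR notch filter placed at the strongest spectral line; the sampling rate, tone frequency and Q factor are arbitrary choices.

```python
# Toy narrowband-interference cancellation for a UWB-like signal.
import numpy as np
from scipy import signal

fs = 2e9                                    # 2 GS/s sampling rate
t = np.arange(4096) / fs

# UWB-like signal: train of short Gaussian pulses
pulse_times = t[::512]
uwb = sum(np.exp(-((t - t0) * 2e9) ** 2) for t0 in pulse_times)

nbi = 0.8 * np.sin(2 * np.pi * 400e6 * t)   # narrowband interferer
rx = uwb + nbi

# Locate the interferer as the strongest spectral line, then notch it
spectrum = np.abs(np.fft.rfft(rx))
f_est = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spectrum)]
b, a = signal.iirnotch(f_est, Q=30, fs=fs)
cleaned = signal.filtfilt(b, a, rx)

print(f"estimated NBI at {f_est / 1e6:.0f} MHz")
print(f"residual power: {np.var(cleaned - uwb):.2e} "
      f"(before: {np.var(rx - uwb):.2e})")
```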
140

Aqueous and solid phase interactions of radionuclides with organic complexing agents

Reinoso-Maset, Estela January 2010 (has links)
Characterising the geochemistry and speciation of major contaminant radionuclides is crucial in order to understand their behaviour and migration in complex environmental systems. Organic complexing agents used in nuclear decontamination have been found to enhance the migration of radionuclides at contaminated sites; however, the mechanisms of the interactions in complex environments are poorly understood. In this work, radionuclide speciation and sorption behaviour were investigated in order to identify interactions between four key radionuclides with different oxidation states (Cs(I) and Sr(II) as important fission products; Th(IV) and U(VI) as representative actinides), three anthropogenic organic complexing agents with different denticities (EDTA, NTA and picolinic acid, as common co-contaminants), and natural sand (as a simple environmental solid phase). A UV-spectrophotometric method and an IC method were developed to monitor the behaviour of EDTA, NTA and picolinic acid in the later experiments. The optimised methods were simple, used widely available instrumentation, and achieved the necessary analytical figures of merit to allow compound-specific determination over variable background levels of DOC and in the presence of natural cations, anions and radionuclides. The effect of the ligands on the solubility of the radionuclides was studied using a natural sand matrix, with pure silica for comparison of anions, cations and organic carbon. In the silica system, the presence of EDTA, NTA and, to a lesser extent, picolinic acid showed a clear net effect of increasing Th and U solubility. Conversely, in the sand system, the sorption of Th and U was kinetically controlled, and radionuclide complexation by the ligands enhanced the rate of sorption, by a mechanism identified as metal exchange with matrix metals. Experiments in which excess EDTA, NTA and picolinic acid (40- to 100-fold excess) were pre-equilibrated with Th and U prior to contact with the sand, to allow a greater degree of radionuclide complex formation, resulted in enhanced rates of sorption. This confirmed that the radionuclide complexes interacted with the sand surface more readily than uncomplexed Th or U. Overall, this shows that Th and U mobility would be lowered in this natural sand by the presence of organic co-contaminants. In contrast, the complexation of Sr with the complexing agents was rapid, and the effect of the ligands was observed as a net increase in Sr solubility (EDTA, picolinic acid) or sorption (NTA). As expected, Cs did not interact with the ligands and showed rapid sorption kinetics. Finally, ESI-MS was used to study competitive interactions in the aqueous Th-Mn-ligand ternary system. Quantification presented a challenge; however, the careful approach taken to determine the signal correction allowed the competitive interactions between Mn and Th for EDTA to be studied semi-quantitatively. In an EDTA-limited system, Th displaced Mn from the EDTA complex, even in the presence of a higher Mn concentration, which was consistent with the higher stability constant of the Th-EDTA complex.
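The displacement result can be illustrated with a toy competitive-equilibrium calculation. This is an editorial sketch with round-number stability constants of plausible order (Th-EDTA binding far stronger than Mn-EDTA), not the thesis's measured values; EDTA protonation and activity corrections are ignored.

```python
# Toy ligand-limited competition: Th(IV) vs Mn(II) for EDTA.
from scipy.optimize import brentq

K_Th, K_Mn = 10 ** 23, 10 ** 14             # assumed conditional constants
Th_tot, Mn_tot, L_tot = 1e-6, 1e-4, 5e-6    # mol/L; EDTA is limiting

def free_ligand_residual(L):
    # Mass balance on EDTA: free + bound to Th + bound to Mn = total
    bound_Th = K_Th * Th_tot * L / (1 + K_Th * L)
    bound_Mn = K_Mn * Mn_tot * L / (1 + K_Mn * L)
    return L + bound_Th + bound_Mn - L_tot

L = brentq(free_ligand_residual, 1e-30, L_tot)
ThL = K_Th * Th_tot * L / (1 + K_Th * L)
MnL = K_Mn * Mn_tot * L / (1 + K_Mn * L)
print(f"fraction of Th complexed: {ThL / Th_tot:.3f}")   # ~1.0
print(f"fraction of Mn complexed: {MnL / Mn_tot:.3f}")   # a few percent
```

With EDTA limiting, essentially all of the Th ends up complexed while only a few percent of the hundredfold more abundant Mn does, mirroring the displacement behaviour described above.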
