  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Contrasts in Thermal Diffusion and Heat Accumulation Effects in the Fabrication of Waveguides in Glasses using Variable Repetition Rate Femtosecond Laser

Eaton, Shane 31 July 2008 (has links)
A variable (0.2 to 5 MHz) repetition rate femtosecond laser was applied to delineate the role of thermal diffusion and heat accumulation effects in forming low-loss optical waveguides in borosilicate glass across a broad range of laser exposure conditions. For the first time, a transition from thermal diffusion-dominated transport at 200-kHz repetition rate to strong heat accumulation at 0.5 to 2 MHz was observed to drive significant variations in waveguide morphology, with rapidly increasing waveguide diameter that accurately followed a simple thermal diffusion model over all exposure variables tested. Amongst these strong thermal trends, a common exposure window of 200-mW average power and ~15-mm/s scan speed was discovered across the range of 200-kHz to 2-MHz repetition rates for minimizing insertion loss despite a 10-fold drop in laser pulse energy. Waveguide morphology and thermal modeling indicate that strong thermal diffusion effects at 200 kHz give way to a weak heat accumulation effect at ~1-µJ pulse energy for generating low-loss waveguides, while stronger heat accumulation effects above 1-MHz repetition rate offered overall superior guiding. The waveguides were shown to be thermally stable up to 800°C, showing promise for high-temperature applications. Using a low numerical aperture (0.4) lens, the effect of spherical aberration was reduced, enabling similar low-loss waveguides over an unprecedented 520-µm depth range, opening the door to multi-level, three-dimensional optical integrated circuits. In contrast to borosilicate glass, waveguides written in pure fused silica under similar conditions showed little evidence of heat accumulation, yielding morphology similar to waveguides fabricated with low repetition rate (1 kHz) Ti:sapphire lasers.
Despite the absence of heat accumulation in fused silica owing to its large bandgap and high melting point, optimization of the laser wavelength, power, repetition rate, polarization, pulse duration and writing speed resulted in uniform, high-index contrast waveguide structures with low insertion loss. Optimum laser exposure recipes for waveguide formation in borosilicate and fused silica glass were applied to fabricate optical devices such as wavelength-sensitive and insensitive directional couplers for passive optical networks, buried and surface microfluidic and waveguide networks for lab-on-a-chip functionality, and narrowband grating waveguides for sensing.
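The transition from diffusion-dominated to accumulation-dominated behaviour reported above can be rationalized with a back-of-the-envelope comparison of the inter-pulse heat diffusion length against the focal spot size. A sketch only; the diffusivity and spot radius below are assumed textbook-scale values, not numbers taken from the thesis:

```python
import math

D = 5e-7    # m^2/s, assumed thermal diffusivity of borosilicate glass
w0 = 2e-6   # m, assumed focal spot radius

def diffusion_length(rep_rate_hz):
    """Heat diffusion length over one inter-pulse period: L = sqrt(4*D*t)."""
    return math.sqrt(4 * D / rep_rate_hz)

for f in (200e3, 500e3, 1e6, 2e6):
    L = diffusion_length(f)
    regime = "diffusion wins" if L > w0 else "heat accumulates"
    print(f"{f/1e3:6.0f} kHz: L = {L*1e6:.2f} um -> {regime}")
```

With these assumed values the crossover lands between 200 kHz and 1 MHz, consistent with the transition the thesis observes.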
22

Oversampling A/D Converters with Improved Signal Transfer Functions

Pandita, Bupesh 21 April 2010 (has links)
This thesis proposes a low-IF receiver architecture suitable for the realization of single-chip receivers. To alleviate the image-rejection requirements of the front-end filters, an oversampling complex discrete-time ΔΣ ADC with a signal-transfer function that achieves significant filtering of interfering signals is proposed. A filtering ADC reduces the complexity of the receiver by minimizing the requirements on analog filters in the IF digitization path. Discrete-time ΔΣ ADCs have precise resonant-frequency-to-clock-frequency ratios and, hence, do not require the calibration or tuning that is necessary in the case of continuous-time ΔΣ modulator implementations. This feature makes the proposed discrete-time ΔΣ ADC ideal for multistandard receiver applications.
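The oversampling principle underlying such converters can be illustrated with a minimal first-order discrete-time ΔΣ loop. This is a sketch of the generic mechanism only; the complex, filtering, higher-order modulator the thesis proposes is far more elaborate:

```python
import numpy as np

def first_order_dsm(x):
    """First-order discrete-time delta-sigma modulator with a 1-bit quantizer.
    The integrator accumulates the quantization error, pushing it to high
    frequencies where a decimation filter can remove it."""
    v = 0.0
    out = np.empty_like(x)
    for n, xn in enumerate(x):
        y = 1.0 if v >= 0.0 else -1.0   # 1-bit quantizer
        v += xn - y                     # integrate the quantization error
        out[n] = y
    return out

x = np.full(10_000, 0.3)   # DC input inside the [-1, 1] full-scale range
y = first_order_dsm(x)
print(y.mean())            # the +/-1 bitstream averages to ~0.3
```

Because the integrator state stays bounded, the running average of the bitstream tracks the input to within O(1/N) over N samples.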
23

Automatic Program Parallelization Using Traces

Bradel, Borys 16 March 2011 (has links)
We present a novel automatic parallelization approach that uses traces. Our approach uses a binary representation of a program, allowing for the parallelization of programs even if their full source code is not available. Furthermore, traces can represent both iteration and recursion. We use hardware transactional memory (HTM) to ensure correct execution in the presence of dependences. We describe a parallel trace execution model that allows sequential programs to execute in parallel. In the model, traces are identified by a trace collection system (TCS), the program is transformed to allow the traces to execute on multiple processors, and the traces are executed in parallel. We present a framework with four components that, with a TCS, realizes our execution model. The grouping component groups traces into tasks to reduce overhead and make identification of successor traces easier. The packaging component allows tasks to execute on multiple processors. The dependence component deals with dependences on reduction and induction variables. In addition, transactions are committed in sequential program order on an HTM system to deal with dependences that are not removed. Finally, the scheduler assigns tasks to processors. We create a prototype that parallelizes programs and uses an HTM simulator to deal with dependences. To overcome the limitations of simulation, we also create another prototype that automatically parallelizes programs on a real system. Since HTM is not used, only dependences on induction and reduction variables are handled. We demonstrate the feasibility of our trace-based parallelization approach by performing an experimental evaluation on several recursive and loop-based Java programs. On the HTM system, the average speedup of the computational phase of the benchmarks on four processors is 2.79. On a real system, the average speedup on four processors is 1.83. 
Therefore, the evaluation indicates that trace-based parallelization can be used to effectively parallelize recursive and loop-based Java programs based on their binary representation.
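As a rough way to interpret the reported numbers, the four-processor speedups above can be inverted through Amdahl's law to estimate the effective parallel fraction of the benchmarks. This idealized reading is a sketch, not part of the thesis's own analysis:

```python
def efficiency(speedup, n_procs):
    """Parallel efficiency: achieved speedup per processor."""
    return speedup / n_procs

def amdahl_parallel_fraction(speedup, n_procs):
    """Invert Amdahl's law S = 1 / ((1 - p) + p/n) for the parallel fraction p."""
    return (1 - 1 / speedup) / (1 - 1 / n_procs)

for label, s in (("HTM system", 2.79), ("real system", 1.83)):
    print(f"{label}: efficiency {efficiency(s, 4):.2f}, "
          f"Amdahl parallel fraction {amdahl_parallel_fraction(s, 4):.2f}")
```

Under this model, the 2.79x speedup corresponds to roughly 86% of the work being parallelizable, and the 1.83x speedup to roughly 60%.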
24

Planar Leaky-Wave Antennas and Microwave Circuits by Practical Surface Wave Launching

Podilchak, Symon 01 October 2013 (has links)
Modern communication systems have increased the need for creative antenna solutions and low-profile circuit configurations that can offer high-quality performance at low cost. The microwave and millimeter-wave frequency ranges have shown much promise, allowing for increased data transmission rates while also offering smaller and more compact designs. Specific applications for these wireless systems include radar, biomedical sensors, phased arrays, and communication devices. Planar antennas and circuits are generally well suited to these applications due to their low profile and ease of fabrication. However, classic feeding techniques for planar structures can be problematic. Losses can also be observed in these conventional feeding schemes due to unwanted surface wave (SW) excitation. This can lead to reduced antenna and circuit efficiencies, and thus diminished system performance. It is shown in this thesis that by the use of planar SW sources, or surface-wave launchers (SWLs), innovative and efficient antennas and feed systems are possible. Theoretical analysis and experimental verification for these SWLs are initially presented. New topologies and array configurations are also examined for directive beam steering at end-fire and at broadside. Additionally, studied structures include novel surface-wave antennas (SWAs) and leaky-wave antennas (LWAs) for 3-D beam pattern control in the far field. A comprehensive design strategy is also examined which describes the implementation of these planar antennas using SWLs. This design strategy is based on a full-wave analysis of the modes that can be supported by the planar structures, which include various planar-periodic metallic strip configurations and partially reflecting surfaces (PRSs), or screens. With appropriate conditions, SWs can also be bound and guided for field channeling and power routing. For instance, novel planar metallic SW lenses and guidance structures are developed.
Demonstrated applications include couplers, transition sections, as well as new planar circuits for power dividing/combining. To the author's knowledge, similar techniques allowing such controlled SW propagation and radiation have not been previously studied in the literature. In this way, SWs, which are normally considered an unwanted effect, are turned to advantage. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2013-09-30 08:29:04.107
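The frequency-scanned main beam of a leaky-wave antenna follows the textbook relation sin θ = β/k₀, with θ measured from broadside. A minimal sketch; the 20-GHz operating point and phase constant below are illustrative assumptions, not values from the thesis:

```python
import math

C0 = 299_792_458.0  # speed of light, m/s

def beam_angle_deg(freq_hz, beta_per_m):
    """Main-beam angle from broadside for a forward fast-wave (leaky) mode."""
    k0 = 2 * math.pi * freq_hz / C0   # free-space wavenumber
    return math.degrees(math.asin(beta_per_m / k0))

k0 = 2 * math.pi * 20e9 / C0
print(beam_angle_deg(20e9, 0.0))       # beta = 0 radiates at broadside
print(beam_angle_deg(20e9, 0.5 * k0))  # beta = k0/2 tilts the beam 30 degrees
```

Since β is dispersive, sweeping frequency sweeps θ, which is the basis of the beam steering discussed above.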
25

Design and Practical Implementation of Digital Auto-tuning and Fast-response Controllers for Low-power Switch-mode Power Supplies

Zhao, Zhenyu 01 August 2008 (has links)
In switched-mode power supplies (SMPS), a controller is required for output voltage or current regulation. In low-power SMPS, processing power from a fraction of a watt to several hundred watts, digital implementations of the controller (digital controllers) have recently emerged as alternatives to the predominantly used analog systems. This is mostly due to better design portability, power-management capability, and the potential for implementing advanced control techniques that are not easy to realize with analog hardware. However, the existing digital implementations are barely functional replicas of analog designs, having comparable dynamic performance if not poorer. Due to stringent constraints on hardware requirements, the digital systems have not been able to demonstrate some of their most attractive features, such as parameter estimation, controller auto-tuning, and nonlinear time-optimal control for improved transient response. This thesis presents two novel digital controllers and systems. The first is an auto-tuning controller that can be implemented with simple hardware and is suitable for IC integration. The controller estimates power-stage parameters, such as output capacitance, load resistance, corner frequency and damping factor, by examining the amplitude and frequency of intentionally introduced limit-cycle oscillations. Accordingly, a digital PID compensator is automatically redesigned and the power stage is adapted to provide good dynamic response and high power-processing efficiency. Compared to state-of-the-art analog solutions, the controller has similar bandwidth and improves overall efficiency. To break the control bandwidth limitation associated with the sampling effects of PWM controllers, the second part of the thesis develops a nonlinear dual-mode controller.
In steady state, the controller behaves as a conventional PWM controller, and during transients it utilizes a continuous-time digital signal processor (CT-DSP) to achieve time-optimal response. The processor performs a capacitor charge balance based algorithm to achieve voltage recovery through a single on-off sequence of the power switches. Load transient response with minimal achievable voltage deviation and a recovery time approaching physical limitations of a given power stage is obtained experimentally.
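The capacitor charge-balance principle behind the time-optimal response can be made concrete with a textbook buck-converter calculation (component values below are illustrative assumptions, and this is the standard derivation rather than the authors' exact algorithm): after a load step ΔI, the switch stays on until the inductor current overshoots the new load by ΔI·√(Vo/Vin), at which point the surplus charge delivered during the overshoot exactly cancels the deficit accumulated while the current ramped up.

```python
import math

Vin, Vo = 5.0, 1.8   # V, assumed input and output voltages
L = 1e-6             # H, assumed filter inductance
dI = 2.0             # A, load current step

m1 = (Vin - Vo) / L  # inductor current slope with the switch on  (A/s)
m2 = Vo / L          # slope magnitude with the switch off        (A/s)

# Charge deficit while the inductor current ramps up to the new load level
Q_deficit = dI**2 / (2 * m1)

# Overshoot chosen by charge balance; the surplus triangle spans both slopes
dI2 = dI * math.sqrt(Vo / Vin)
Q_surplus = dI2**2 / 2 * (1 / m1 + 1 / m2)

print(Q_deficit, Q_surplus)  # equal: one on-off sequence balances the charge
```

The two charges match identically, which is why a single on-off switching sequence suffices for recovery in this idealized model.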
26

Formal Methods in Automated Design Debugging

Safarpour, Sean Arash 28 September 2009 (has links)
The relentless growth in size and complexity of semiconductor devices over the last decades continues to present new challenges to the electronic design community. Today, functional debugging is a bottleneck that jeopardizes the future growth of the industry, as it can account for up to 30% of the overall design effort. To alleviate the manual debugging burden for industrial problems, scalable, practical and robust automated debugging solutions are required. This dissertation presents novel techniques and methodologies to bridge the gap between the current capabilities of automated debuggers and strict industry requirements. The contributions proposed leverage powerful advancements made in the formal methods community, such as model checking and reasoning engines, to significantly ease the debugging effort. The first contribution, abstraction and refinement, is a systematic methodology that reduces the complexity of debugging problems by abstracting irrelevant sections of the circuits under analysis. Powerful abstraction techniques are developed for netlists as well as hierarchical and modular designs. Experiments demonstrate that an abstraction and refinement methodology requires up to 200 times less run-time and 27 times less memory than a state-of-the-art debugger. The second contribution, Bounded Model Debugging (BMD), is a debugging methodology based on the observation that erroneous behaviour is more likely caused by errors excited temporally close to observation points. BMD systematically generates a series of consecutively larger yet more complete debugging problems to be solved. Experiments show the effectiveness of BMD, as 93% of the large problems are solved with BMD versus 34% without. A third contribution is an automated debugging formulation based on maximum satisfiability. The formulation is used to build a powerful two-step, coarse- and fine-grained debugging framework providing up to 980 times performance improvement.
The final contribution of this thesis is a trace reduction technique that uses reachability analysis to identify the observed failure with fewer simulation events. Experiments demonstrate that many redundant state transitions can be removed resulting in traces with up to 100 times fewer events than the original.
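The maximum-satisfiability formulation can be sketched on a toy example: the circuit's gate functions become soft constraints, the observed inputs and expected output become hard constraints, and the gates whose constraints must be dropped to restore consistency are the suspects. Below is a brute-force stand-in for the MaxSAT engine on a hypothetical two-gate circuit, not an example from the dissertation:

```python
from itertools import combinations, product

# Intended spec: z = (a AND b) OR c.  The implementation wired gate g1
# as an OR instead of an AND -- that is the bug we want to localize.
gates = {
    "g1": lambda s: s["t"] == (s["a"] or s["b"]),   # buggy gate
    "g2": lambda s: s["z"] == (s["t"] or s["c"]),
}

def suspects(inputs, expected_z):
    """Smallest sets of gates whose constraints must be dropped so the
    failing observation becomes consistent (the debugging solutions)."""
    for k in range(len(gates) + 1):
        found = [
            set(freed)
            for freed in combinations(gates, k)
            if any(
                s["z"] == expected_z
                and all(f(s) for name, f in gates.items() if name not in freed)
                for t, z in product([False, True], repeat=2)
                for s in [dict(inputs, t=t, z=z)]
            )
        ]
        if found:
            return found
    return []

# Failing test vector: a=0, b=1, c=0 -> the spec demands z=0, circuit gives 1
print(suspects({"a": False, "b": True, "c": False}, expected_z=False))
```

Both gates on the path to the failing output come back as single-gate suspects, including the real bug g1; in practice additional test vectors and a real MaxSAT solver narrow the list.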
27

Quadrature Down-converter for Wireless Communications

Mahmoudi, Farsheed 30 August 2012 (has links)
Future generations of wireless systems will feature high data rates and be implemented in low-voltage CMOS technologies. Direct conversion receivers (DCRs) will be used in such systems, which will require low-voltage RF front-ends with adequate linearity. The down-converter in a DCR is a critical block in determining linearity. In addition to detailed DCR modeling in MATLAB, this thesis, completed in 2005, deals with the design and characterization of a 1V, 8GHz quadrature down-converter. It consists of two mixers and a quadrature generator implemented in a 0.18-µm CMOS technology. The mixer architecture proposed in this work uses a new transconductor. It simultaneously satisfies the low-voltage and high-linearity requirements. It also relaxes the inherent trade-off between gain and linearity governing CMOS active mixers. The implemented mixer occupies an area of 320 × 400 µm² and exhibits a power conversion gain of +6.5dB, a P1dB of -5.5dBm, an IIP3 of +3.5dBm, an IIP2 of better than +48dBm, a noise figure of 11.5dB, and an LO-to-RF isolation of 60dB at 8GHz, and consumes 6.9mW of power from a 1V supply. The proposed quadrature generator circuit features a new architecture which embeds the quadrature generation scheme into the LO buffer using active inductors. The circuit offers easy tunability over process, supply and temperature variations by relaxing the coupling between amplitude and phase tuning of the outputs. The implemented circuit occupies an area of 150 × 90 µm² and exhibits an amplitude and quadrature phase accuracy of 1 dB and 1.5° respectively over a bandwidth of 100 MHz, with a power consumption of 12mW from a 1V supply, including the LO buffer. The quadrature down-converter features an image rejection ratio of better than 40 dB and satisfies the potential target specifications of future mobile phones, extracted in this work.
28

Robustness and Vulnerability Design for Autonomic Management

Bigdeli, Alireza 20 August 2012 (has links)
This thesis presents network design and operations algorithms suitable for use in an autonomic management system for communication networks, with emphasis on network robustness. We model a communication network as a weighted graph and use graph-theoretical metrics such as network criticality and algebraic connectivity to quantify robustness. The management system under consideration is composed of slow and fast control loops, where slow loops manage slow-changing issues of the network and fast loops react to events or demands that need a quick response. Both control loops drive the process of network management towards the most robust state. We first examine the topology design of networks. We compare designs obtained using different graph metrics. We consider well-known topology classes, including structured and complex networks, and provide guidelines on the design and simplification of network structures. We also compare the robustness properties of several data center topologies. Next, the Robust Survivable Routing (RSR) algorithm is presented to assign working and backup paths to online demands. As a path-based survivable routing method, RSR guarantees 100% single-link-failure recovery. RSR quantifies each path with a value that represents its sensitivity to incremental changes in external traffic and topology by evaluating the variations in network criticality. The most robust path (the one causing the minimum change in total network criticality) is chosen as the primary (respectively secondary) path. In the last part of this thesis, we consider the design of robust networks with emphasis on minimizing vulnerability to single node and link failures. Our focus in this part is to study the behavior of a communication network in the presence of node/link failures, and to optimize the network to maximize performance in the presence of failures.
For this purpose, we propose new vulnerability metrics based on the worst case or the expected value of network criticality or algebraic connectivity when a single node/link failure happens. We show that these vulnerability metrics are convex (or concave) functions of link weights and we propose convex optimization problems to optimize each vulnerability metric. In particular, we convert the optimization problems to SDP formulation which leads to a faster implementation for large networks.
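Algebraic connectivity, one of the two robustness metrics used above, is the second-smallest eigenvalue of the graph Laplacian L = D − A. A minimal computation on illustrative three-node topologies (not networks from the thesis):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the Laplacian L = D - A."""
    A = np.asarray(adj, dtype=float)
    lap = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(lap))[1]

path = [[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]]   # a - b - c chain: one cut edge disconnects it
ring = [[0, 1, 1],
        [1, 0, 1],
        [1, 1, 0]]   # fully connected triangle: more robust

print(algebraic_connectivity(path))  # 1.0
print(algebraic_connectivity(ring))  # 3.0
```

A higher value indicates a better-connected, more failure-tolerant topology, which is why it serves as an optimization target for the control loops described above.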
29

Robust Subject Recognition Using the Electrocardiogram

Agrafioti, Foteini 30 July 2008 (has links)
This thesis studies the applicability of the electrocardiogram (ECG) signal as a biometric. There is strong evidence that the heart's electrical activity embeds highly distinctive characteristics, suitable for applications such as the recognition of human subjects. Such systems traditionally provide two modes of functionality, identification and authentication; frameworks for subject recognition are herein proposed and analyzed in both scenarios. As in most pattern recognition problems, the probability of misclassification decreases as more learning information becomes available. Thus, a central consideration is the design and evaluation of algorithms which exploit the added information provided by the standard 12-lead ECG recording system. Feature- and decision-level fusion techniques described in this thesis offer enhanced security levels. The main novelty of the proposed approach lies in the design of an identification system robust to cardiac arrhythmias. Criteria concerning the power distribution and information-theoretic complexity of electrocardiogram windows are defined to flag abnormal ECG recordings that are not suitable for recognition. Experimental results indicate high recognition rates and highlight identification based on ECG signals as very promising.
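A representative fiducial-free feature in this line of work is the normalized autocorrelation of a short ECG window, which avoids locating individual wave boundaries. A sketch on a synthetic signal (the waveform, window, and lag count are arbitrary illustrative choices, not the thesis's exact pipeline):

```python
import numpy as np

def ac_feature(window, num_lags):
    """Normalized autocorrelation over the first num_lags lags."""
    w = window - window.mean()
    ac = np.correlate(w, w, mode="full")[len(w) - 1:]
    return ac[:num_lags] / ac[0]     # ac[0] is the window's energy

# Synthetic heartbeat-like signal: periodic narrow pulses plus noise
rng = np.random.default_rng(0)
t = np.arange(1000)
sig = np.sin(np.pi * t / 100) ** 8 + 0.05 * rng.standard_normal(t.size)
feat = ac_feature(sig, 50)
print(feat[:3])   # starts at exactly 1.0 by construction
```

Such feature vectors can then be compressed (e.g. by a linear projection) and compared across subjects for identification.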
30

From Images to Maps

Appel, Ron 24 February 2009 (has links)
This work proposes a two-stage method that reconstructs the map of a scene from tagged photographs of that scene. In the first stage, several methods are proposed that transform tag data from the photographs into an intermediary distance matrix. These methods are compared against each other. In the second stage, an approach based on the physical mass-spring system is proposed that transforms the distance matrix into a map. This approach is compared against and outperforms MDS-MAP(P) when given human-tagged input photographs. Experiments are carried out on two test datasets, one with 67 tags, and the other with 19. An evaluation method is described and the optimal overall reconstruction generates maps with accuracies of 47% and 66% respectively for the two test datasets, both scoring roughly 40% higher than a random reconstruction. The map reconstruction method is applied to three sample datasets and the resulting maps are qualitatively evaluated.
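The second-stage mass-spring idea can be sketched as gradient descent on the energy of a spring system whose rest lengths are the entries of the distance matrix. A minimal version recovering a 3-4-5 triangle from its exact distances (all numbers and details here are illustrative, not the thesis's datasets or algorithm):

```python
import numpy as np

def spring_layout(D, dim=2, iters=2000, lr=0.01, seed=0):
    """Gradient descent on sum_ij (|xi - xj| - D_ij)^2, the energy of a
    mass-spring system with rest lengths D_ij."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((D.shape[0], dim))
    for _ in range(iters):
        diff = X[:, None, :] - X[None, :, :]      # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)               # avoid division by zero
        coef = (dist - D) / dist                  # spring extension per unit
        np.fill_diagonal(coef, 0.0)
        X -= lr * 4 * (coef[:, :, None] * diff).sum(axis=1)
    return X

pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])       # 3-4-5 triangle
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # exact distances
X = spring_layout(D)
rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.abs(rec - D).max())   # residual distance error after descent
```

The layout is recovered only up to rotation, reflection and translation, which is why map accuracy has to be judged on pairwise distances rather than raw coordinates.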
