51

GPU Acceleration of the Variational Monte Carlo Method for Many Body Physics

Rajagopalan, Kaushik Ragavan 21 April 2013 (has links)
High-performance computing is one of the major areas making inroads into the future of large-scale simulation. Applications such as 3D nuclear tests, molecular dynamics, and quantum Monte Carlo simulations are now developed on supercomputers using the latest computing technologies. According to the TOP500 rating, most of today's supercomputers are heterogeneous: massively parallel Graphics Processing Units (GPUs) are paired with multi-core CPUs to increase computational capacity. The Variational Monte Carlo (VMC) method is used in many-body physics to study the ground-state properties of a system. The wavefunction depends on variational parameters, which encode the physics needed for an accurate prediction. In general, the variational parameters are chosen to realize some sort of order or broken symmetry, such as superconductivity or magnetism. The variational approach is computationally expensive and requires a large number of Markov chains (MCs) to obtain convergence. The MCs exhibit abundant data parallelism; parallelizing across CPU clusters is expensive and does not scale in proportion to the system size, making the method a suitable candidate for a massively parallel GPU. In this research, we discuss the optimization and parallelization strategies adopted to port the VMC method to an NVIDIA GPU using CUDA. We obtained a speedup of nearly 3.85x compared to the MPI implementation [4] and a speedup of up to 19x compared to an object-oriented C++ code.
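The data parallelism across Markov chains that the abstract exploits on the GPU can be illustrated with a toy VMC sketch (not the thesis code): a 1D harmonic oscillator with Gaussian trial wavefunction exp(-αx²), where many independent walkers are stepped in lockstep with vectorized NumPy — the CPU analogue of per-chain GPU parallelism. Function and parameter names are illustrative.

```python
import numpy as np

def vmc_energy(alpha, n_walkers=4096, n_steps=500, step=1.0, seed=0):
    """Toy VMC for a 1D harmonic oscillator with trial psi(x) = exp(-alpha x^2).

    Each walker is an independent Markov chain; vectorizing over walkers
    mirrors the data parallelism the thesis maps onto GPU threads.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_walkers)               # one position per chain
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1, n_walkers)
        # Metropolis acceptance ratio |psi(x_new)/psi(x)|^2
        ratio = np.exp(-2.0 * alpha * (x_new**2 - x**2))
        accept = rng.uniform(size=n_walkers) < ratio
        x = np.where(accept, x_new, x)
    # local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)
    e_loc = alpha + x**2 * (0.5 - 2.0 * alpha**2)
    return e_loc.mean()
```

At alpha = 0.5 the trial function is the exact ground state, so the local energy is constant (0.5) and the estimator has zero variance; any other alpha gives a higher variational energy.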
52

Application of Non-linear Optimization Techniques in Wireless Telecommunication Systems

Kohandani, Farzaneh January 2006 (has links)
Non-linear programming has been used extensively in the design of wireless telecommunication systems, where an important optimization criterion is the minimization of mean square error. This thesis examines two applications: peak-to-average power ratio (PAPR) reduction in orthogonal frequency division multiplexing (OFDM) systems and wireless airtime traffic estimation. Both applications are of interest to wireless service providers. PAPR reduction is implemented in handheld devices, so low complexity is a major objective. On the other hand, accurate traffic prediction can save wireless service providers substantial cost through better resource management in off-line operations.

High PAPR is one of the major disadvantages of OFDM systems; it results from the large envelope fluctuation of the signal. Our proposed technique to reduce the PAPR is based on constellation shaping: it starts with a larger constellation of points, from which the points with higher energy are then removed. The constellation-shaping algorithm is combined with peak reduction, with extra flexibilities defined to reduce the signal peak. This method, called MMSE-Threshold, achieves a significant improvement in PAPR reduction with low computational complexity.

The peak reduction is formulated as a quadratic minimization problem and subsequently optimized by semidefinite programming. The simulation results show that the PAPR of the semidefinite programming algorithm (SDPA) is noticeably better than that of MMSE-Threshold, although SDPA has higher complexity. Results are also presented for PAPR minimization using optimization techniques such as hill climbing and simulated annealing. The simulation results indicate that for a small number of sub-carriers, both hill climbing and simulated annealing yield a significant improvement in PAPR reduction, although their complexity can be very large.

The second application of non-linear optimization is airtime data traffic estimation. This is a crucial problem in many organizations and plays a significant role in a company's resource management; even a small improvement in the prediction can save the organization substantial cost. Our proposed method is based on defining extra parameters for the basic structural model. In the proposed technique, a novel search method combines maximum likelihood estimation with the mean absolute percentage error of the estimated data. Simulation results indicate a substantial improvement of the proposed technique over both the basic structural model and the seasonal autoregressive integrated moving average (SARIMA) package. In addition, the model is capable of updating its parameters when new data become available.
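For readers unfamiliar with the metric being minimized, a minimal sketch (not from the thesis) of computing the PAPR of one OFDM symbol: the frequency-domain constellation points are transformed to the time domain with an oversampled IFFT, and PAPR is the ratio of peak to mean instantaneous power.

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR (dB) of one OFDM symbol: peak over mean power of the
    time-domain signal from an oversampled IFFT. Zero-padding in the
    middle of the spectrum implements the oversampling."""
    n = len(freq_symbols)
    padded = np.zeros(oversample * n, dtype=complex)
    padded[:n // 2] = freq_symbols[:n // 2]          # positive frequencies
    padded[-(n - n // 2):] = freq_symbols[n // 2:]   # negative frequencies
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())
```

The worst case occurs when all sub-carriers add coherently: with all-ones input the PAPR equals 10·log10(N) for N sub-carriers, which is what constellation-shaping methods like the one above try to avoid.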
53

Threshold Voltage Instability and Relaxation in Hydrogenated Amorphous Silicon Thin Film Transistors

Akhavan Fomani, Arash January 2005 (has links)
This thesis presents a study of the bias-induced threshold voltage metastability of hydrogenated amorphous silicon (a-Si:H) thin film transistors (TFTs). Application of a gate bias stress shifts the threshold voltage of a TFT; after the bias stress is removed, the threshold voltage eventually returns to its original value. The underlying physical mechanisms for the shift in threshold voltage during the application of the bias and after its removal are investigated.

The creation of extra defect states in the band gap of a-Si:H close to the gate dielectric interface and charge trapping in the silicon nitride (SiN) gate dielectric are the most commonly considered threshold-voltage instability mechanisms. In the first part of this work, the defect-state creation mechanism is reviewed and the kinetics of charge trapping in the SiN are modelled assuming a simplified mono-energetic distribution and a more realistic Gaussian distribution of SiN traps. Charge trapping in mono-energetic SiN traps is well approximated by a logarithmic function of time, whereas trapping with a Gaussian distribution of SiN traps results in more complex behavior.

The change in the threshold voltage of a TFT after the gate bias has been removed is referred to as threshold voltage relaxation, and it is investigated in the second part of this work. A study of threshold voltage relaxation sheds more light on the metastability mechanisms of a-Si:H TFTs. The mechanisms considered for the relaxation of threshold voltage are the annealing of the extra defect states and charge de-trapping from the SiN gate dielectric. The kinetics of charge de-trapping from a mono-energetic and from a Gaussian distribution of SiN traps are analytically modelled. It is shown that the defect-state annealing mechanism cannot explain the observed threshold voltage relaxation, whereas a study of the kinetics of charge de-trapping yields very good agreement with the experimental results. Using the measured threshold voltage relaxation data, a Gaussian distribution of gap states is extracted for the SiN. This explains the threshold voltage relaxation of the TFT after bias stresses with voltages as high as 50 V are removed.

Finally, the results obtained from the threshold voltage relaxation make it possible to calculate the total charge trapped in the SiN and to quantitatively distinguish between the charge trapping and defect-state creation mechanisms. In conclusion, for the TFTs used in this thesis, charge trapping in the SiN gate dielectric is shown to be the dominant threshold voltage metastability mechanism for short bias-stress times.
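The roughly logarithmic trapping kinetics mentioned in the abstract can be reproduced with a toy numerical model (my own stand-in, not the thesis's analytical treatment): each trap fills exponentially with its own capture time constant, and a broad spread of time constants (log-uniform here, standing in for the Gaussian trap distribution) yields a near-logarithmic net threshold-voltage shift.

```python
import numpy as np

def delta_vt(t, tau_min=1e-6, tau_max=1e3, n_traps=2000, v_sat=1.0):
    """Threshold-voltage shift from charge trapping into a broad
    distribution of capture time constants. Each trap fills as
    1 - exp(-t/tau); averaging over many decades of tau gives the
    roughly logarithmic time dependence described in the abstract.
    All parameter values here are illustrative, not device data."""
    taus = np.logspace(np.log10(tau_min), np.log10(tau_max), n_traps)
    fill = 1.0 - np.exp(-t / taus)
    return v_sat * fill.mean()
```

Because the trap time constants are spread uniformly in log(tau), each decade of stress time fills roughly the same fraction of traps, so the shift grows by nearly equal increments per decade — the logarithmic signature.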
54

Channel Estimation and Equalization for Cooperative Communication

Mheidat, Hakam January 2006 (has links)
The revolutionary concept of space-time coding introduced in the last decade has demonstrated that deploying multiple antennas at the transmitter allows a simultaneous increase in throughput and reliability, thanks to the additional degrees of freedom offered by the spatial dimension of the wireless channel. However, antenna arrays are impractical to deploy in some scenarios, e.g., sensor networks, due to space and power limitations.

A new form of realizing transmit diversity has recently been introduced under the name of user cooperation or cooperative diversity. The basic idea behind cooperative diversity rests on the observation that in a wireless environment, the signal transmitted by the source node is overheard by other nodes, which can act as "partners" or "relays". The source and its partners can jointly process and transmit their information, creating a "virtual antenna array" and thereby emulating transmit diversity.

Most ongoing research efforts in cooperative diversity assume frequency-flat channels with perfect channel knowledge. In practical scenarios, e.g., broadband wireless networks, these assumptions do not apply; frequency-selective fading and imperfect channel knowledge constitute a more realistic channel model. Equalization and channel estimation algorithms are a crucial element in the design of digital receivers, as their accuracy determines the overall performance.

This dissertation creates a framework for designing and analyzing various time- and frequency-domain equalization schemes, i.e., distributed time-reversal (D-TR) STBC, distributed single-carrier frequency-domain (D-SC-FDE) STBC, and distributed orthogonal frequency division multiplexing (D-OFDM) STBC, for broadband cooperative communication systems. Exploiting the orthogonality embedded in D-STBCs, we maintain low decoding complexity for all the underlying schemes, making them excellent candidates for practical scenarios such as multimedia broadband communication systems.

Furthermore, we propose and analyze various non-coherent detection and channel estimation algorithms to improve the quality and reliability of wireless communication networks. Specifically, we derive a non-coherent decoding rule which can be implemented in practice by a Viterbi-type algorithm. Through the derivation of a pairwise error probability expression, we demonstrate that the proposed non-coherent detector guarantees full diversity. Although this decoding rule is derived assuming quasi-static channels, its inherent channel-tracking capability allows deployment over time-varying channels with promising performance as a sub-optimal solution. As an alternative to non-coherent detection, we also investigate the performance of the mismatched-coherent receiver, i.e., coherent detection with imperfect channel estimation. Our performance analysis demonstrates that over quasi-static channels the mismatched-coherent receiver collects the same full diversity as its non-coherent competitor.

Finally, we investigate and analyze the effect of deploying multiple antennas at the cooperating terminals under different relaying techniques. We derive pairwise error probability expressions that quantify analytically the impact of multiple-antenna deployment at the source, relay, and/or destination terminals on the diversity order for each relaying method under consideration.
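The low decoding complexity that follows from STBC orthogonality can be seen in the classic 2x1 Alamouti code — the building block that distributed STBCs emulate with a source/relay pair. The sketch below is illustrative (a textbook flat-fading Alamouti combiner, not the dissertation's D-STBC receiver): two received samples and two channel gains suffice, and the two symbol estimates decouple into simple linear combinations.

```python
import numpy as np

# Alamouti encoding over two slots (two "antennas" = source and relay):
#   slot 1 transmits (s1, s2); slot 2 transmits (-conj(s2), conj(s1)).
# Receiver observes r1 = h1*s1 + h2*s2, r2 = -h1*conj(s2) + h2*conj(s1).

def alamouti_decode(r1, r2, h1, h2):
    """Linear ML combining for the 2x1 Alamouti code: because the code
    matrix is orthogonal, the symbol estimates separate without any
    matrix inversion or joint search."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    norm = abs(h1) ** 2 + abs(h2) ** 2   # diversity gain |h1|^2 + |h2|^2
    return s1_hat / norm, s2_hat / norm
```

In the noiseless case the combiner recovers the symbols exactly; with noise, each estimate sees an effective SNR scaled by |h1|² + |h2|², which is the diversity benefit the "virtual antenna array" provides.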
55

Design of CMOS Distributed Amplifiers for Broadband Wireline and Wireless Communication Applications

Khodayari Moez, Kambiz January 2006 (has links)
While the RF building blocks of narrowband system-on-chip designs have increasingly been implemented in CMOS over the past decade, researchers have started to examine the possibility of implementing broadband transceivers in CMOS technology. High-speed optical links with operating frequencies of up to 40 GHz and ultra-wideband (UWB) wireless systems operating in the 3-10 GHz band are examples of these broadband applications. CMOS offers low fabrication cost and a higher level of integration compared with the compound semiconductor technologies that currently dominate broadband RFIC applications.

In this work, we focus on the design of broadband low-noise amplifiers, the fundamental building blocks of high-data-rate wireline and wireless telecommunication systems. A well-established microwave engineering technique, distributed amplification, with a potential bandwidth up to the cut-off frequency of the transistors, is employed. However, implementing distributed amplifiers in CMOS poses new challenges, such as gain attenuation caused by the substrate loss of on-chip inductors, a typically large die area, and a large noise figure. These problems are addressed in this dissertation as described below.

On-chip inductors, the essential components of the distributed amplifier's gate and drain transmission lines, dissipate increasing power in the silicon substrate and in the metal lines as frequency rises, which in turn reduces the gain and degrades the input/output matching. Using active negative resistors implemented with a capacitively source-degenerated configuration, we fully compensate the loss of the transmission lines and achieve a flat gain of 10 dB over the entire DC-to-44 GHz bandwidth.

We address another drawback of distributed amplifiers, the large die area, by using closely placed RF transmission lines instead of spiral inductors. Because transmission lines can be implemented more compactly, the area of the distributed amplifier is considerably reduced, at the expense of extra design steps required to model the closely placed lines. A post-layout simulation method is developed that accounts for inductive and capacitive coupling by incorporating a 3D EM simulator into the design process. A 9-dB, 27-GHz distributed amplifier has been fabricated in an area as small as 0.17 mm² in TSMC's 180 nm CMOS process.

For wireless (UWB) applications, a very low noise figure is required of the broadband preamplifier. Conventional distributed amplifiers fail to provide a low noise figure mainly because of the noise injected by the terminating resistor of the gate transmission line. We replace the terminating resistor with a frequency-dependent termination that trades off the low-frequency input matching of the distributed amplifier (not required for UWB) for better noise performance. Our proposed design provides a gain of 12 dB with an average noise figure of 3.4 dB over the entire 3-10 GHz band, advancing the state of the art in broadband LNAs.
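The observation that noise injected near the amplifier input (here, by the gate-line terminating resistor) dominates the overall noise figure is an instance of the general cascade rule captured by the Friis formula: later stages' noise is divided by the gain preceding them. A minimal sketch with illustrative values (not data from the dissertation):

```python
def friis_total_nf(stages):
    """Total noise factor of cascaded stages (Friis formula):
        F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    `stages` is a list of (noise_factor, gain) tuples in LINEAR units
    (not dB). Only the first stage's noise enters at full weight."""
    f_total, gain = stages[0]
    for f, g in stages[1:]:
        f_total += (f - 1.0) / gain   # later noise divided by prior gain
        gain *= g
    return f_total
```

For example, a front end with F = 2 (3 dB NF) and 10 dB gain followed by a noisy F = 4 stage gives F_total = 2 + 3/10 = 2.3: the second stage barely matters, which is why suppressing the input-referred termination noise is the effective lever.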
56

Data-Driven Fault Detection Using Trending Analysis

Luo, Min 11 October 2006 (has links)
The objective of this research is to develop data-driven fault detection methods that do not rely on mathematical models yet are capable of detecting process malfunctions. Instead of using mathematical models for comparing performance, the methods developed rely on an extensive collection of data to establish classification schemes that detect faults in new data. The research develops two trending approaches. The first uses normal data to define a one-class classifier; the second uses a data mining technique, support vector machines (SVMs), to define multi-class classifiers. Each classifier is trained on a set of example objects. One-class classification assumes that information from only one class, the normal class, is available, so the boundary between the normal and faulty classes is estimated from data of the normal class alone; the research assumes that the convex hull of the normal data can serve as this boundary. The multi-class classifier is implemented through several binary classifiers, under the assumption that data from both classes are available and the decision boundary is supported from both sides by example objects. To detect significant trends in the data, the research implements a non-uniform quantization technique based on Lloyd's algorithm and defines a special subsequence-based kernel. The effect of the subsequence length is examined through computer simulations and theoretical analysis. The test bed used to collect data and implement the fault detection is a six-degree-of-freedom rigid-body model of a B747-100/200, and only faults in the actuators are considered. To test the efficiency of the approach thoroughly, the tests use only sensor data that do not include manipulated variables. Even with this handicap, the approach is effective, with an average of 79.5% correct detections, 16.7% missed alarms, and 3.9% false alarms across six different faults.
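The non-uniform quantization step based on Lloyd's algorithm can be sketched as follows. This is a generic scalar Lloyd iteration (nearest-level assignment, then centroid update), not the thesis implementation; the function name, initialization, and parameters are illustrative.

```python
import numpy as np

def lloyd_quantizer(data, n_levels, n_iter=50):
    """Non-uniform scalar quantizer via Lloyd's algorithm.
    Alternates two steps until (approximate) convergence:
      1) assign each sample to its nearest codebook level,
      2) move each level to the mean of the samples assigned to it.
    Returns the sorted codebook levels."""
    # initialize levels from spread-out quantiles of the data
    levels = np.quantile(data, np.linspace(0.05, 0.95, n_levels))
    for _ in range(n_iter):
        idx = np.argmin(np.abs(data[:, None] - levels[None, :]), axis=1)
        for k in range(n_levels):
            pts = data[idx == k]
            if pts.size:
                levels[k] = pts.mean()   # centroid update
        levels.sort()
    return levels
```

Because levels migrate toward dense regions of the signal's amplitude distribution, the quantizer spends its symbols where the trending data actually lives — the property that makes the subsequent subsequence-based kernel informative.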
57

Generalized D-Sequences and Their Application to CDMA Systems

Vaddiraja, Radhika 09 June 2003 (has links)
Code Division Multiple Access (CDMA), a form of spread-spectrum communication, is widely used in cellular telephony. CDMA systems employ Walsh-Hadamard orthogonal codes jointly with pseudo-noise (PN) sequences, Gold sequences, and Kasami sequences to achieve spreading. This thesis investigates properties of generalized d-sequences and their application as spreading sequences in CDMA systems. The correlation properties of these sequences are studied: their autocorrelation function is not exactly two-valued, but the cross-correlation values are zero for a certain class of these sequences. The zero cross-correlation property can be useful in solving the near-far problem in CDMA communication systems, thus obviating the need for power control. The performance of these sequences is analyzed and their application to CDMA systems is investigated.
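The zero cross-correlation property sought in the d-sequences is the same property that makes Walsh-Hadamard codes work in synchronous CDMA, and it can be demonstrated directly. The sketch below is a generic synchronous-CDMA example with Walsh codes (not the thesis's d-sequence construction); function names are illustrative.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n (+1/-1) Hadamard matrix
    (n a power of 2). Its rows are the Walsh codes mentioned above,
    mutually orthogonal: h[i] @ h[j] == 0 for i != j."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def despread(chips, code):
    """Correlate received chips against one user's code (synchronous
    CDMA): orthogonal codes make other users vanish exactly."""
    return chips @ code / len(code)
```

With zero cross-correlation, a strong nearby user contributes nothing to another user's despread output, which is precisely why such sequences mitigate the near-far problem without power control.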
58

Blind Fault Detection Using Spectral Signatures

Chethan, Pallavi 10 June 2003 (has links)
This work studies a blind fault detection method that analyses only a system's output signal for any change in characteristics from pre-fault to post-fault in order to identify the occurrence of faults. The fault considered in developing the procedure is a change in the time constant of an aircraft's aileron-actuator system and of its simplified version, a position servo system. The method is studied as an alternative to conventional fault detection and identification methods. The output signal is passed through a filter bank to enhance the effect of a fault, and the short-time Fourier transform is applied to the enhanced pre-fault and post-fault signal components to obtain indicators. Fault detection is then approached as a clustering problem of determining distances to fault signatures. This work presents two techniques for creating signatures from the indicators. In the first method, the mean of the indicators is the signature; tests on a position servo system show that this method correctly classifies more than 85% of the indicators and can be used for online classification. The second method uses principal component analysis to define vector-subspace signatures. For the position servo system, the pre-fault indicators produced 14% false alarms and the post-fault indicators missed 17% of the faults. This second method was also applied to a one-axis model of an F-14 aircraft's aileron-actuator system, with around 80% of pre-fault and post-fault indicators correctly identified. The blind fault detection method studied has potential but needs to be understood further by applying it to a wider variety of faults and systems.
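The first signature method (mean indicator plus nearest-signature distance) can be sketched generically. This is an illustrative stand-in for the thesis's filter-bank/STFT pipeline, not its code; the frame length and function names are assumptions.

```python
import numpy as np

def spectral_indicator(signal, n_fft=64):
    """Average magnitude spectrum over short frames: a crude stand-in
    for the STFT-based indicators described in the abstract."""
    n_frames = len(signal) // n_fft
    frames = signal[:n_frames * n_fft].reshape(n_frames, n_fft)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def classify(indicator, signatures):
    """Assign an indicator to the nearest mean signature by Euclidean
    distance - the clustering view of fault detection."""
    dists = [np.linalg.norm(indicator - s) for s in signatures]
    return int(np.argmin(dists))
```

A change in the actuator time constant shifts the output's spectral content, so pre-fault and post-fault indicators land near different signatures even though the detector never sees the system's inputs — the "blind" aspect of the method.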
59

Traffic Engineering in Multiprotocol Label Switching Networks

Wei, Chung-Yu 29 August 2003 (has links)
The goal of traffic engineering is to optimize resource utilization and increase network performance. Constraint-based routing has been proposed as an effective approach to implementing traffic engineering in Multiprotocol Label Switching (MPLS) networks. In this thesis, we review several constraint-based routing algorithms from the literature and point out their advantages and disadvantages. We then propose several algorithms to overcome some of the shortcomings of these approaches. Our algorithms are specifically suited to large, densely connected networks supporting both Quality of Service (QoS) traffic and best-effort traffic. In large networks the MPLS label space in a node may become extremely large; our algorithms allow the size of the label space to be controlled for each node in the network. In addition, explicit routes can be accommodated, supporting both node and link affinity, and we present an algorithm that implements node and link affinity correctly. If the QoS traffic has stringent delay requirements, a path-length limit can be imposed so that the number of hops on the path for such traffic is bounded. Finally, we propose 1+1 and 1:1 path protection mechanisms that use constraint-based routing in MPLS to establish a backup for the working path carrying the primary traffic. Our approach appropriately overcomes these problems, and the results are satisfactory.
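The hop-limited path computation described above is a simple instance of constraint-based routing, and can be sketched as a Dijkstra search over (node, hop-count) states. This is a generic illustration, not one of the thesis's algorithms; the adjacency-list graph encoding is an assumption.

```python
from heapq import heappush, heappop

def min_cost_path(graph, src, dst, max_hops):
    """Least-cost route subject to a hop-count limit.
    graph: {node: [(neighbor, cost), ...]}. Runs Dijkstra over
    (node, hops_used) states so that the hop constraint is respected;
    returns the minimum cost, or None if no feasible path exists."""
    best = {}                      # best known cost per (node, hops) state
    heap = [(0, src, 0)]
    while heap:
        cost, node, hops = heappop(heap)
        if node == dst:
            return cost            # first pop of dst is optimal (costs >= 0)
        if hops == max_hops:
            continue               # cannot extend: hop budget exhausted
        for nbr, w in graph.get(node, []):
            state = (nbr, hops + 1)
            if cost + w < best.get(state, float('inf')):
                best[state] = cost + w
                heappush(heap, (cost + w, nbr, hops + 1))
    return None
```

Tightening `max_hops` trades path cost for delay, mirroring the path-length limit imposed on delay-sensitive QoS traffic.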
60

Transmission of Electromagnetic Power through a Biological Medium

Das, Ripan 02 September 2003 (has links)
The primary goal of this work is to study the transmission of EM power through a multilayered biological medium. As a case study, EM power transmission from an external transmitter to a coupled receiver implanted inside a biological medium simulating a human body is studied to determine factors such as the optimum transmission frequency and excitation current. Different aspects of the interaction of EM waves with biological bodies and tissues are discussed. Two major factors that may affect the transmission of EM power through a biological body are absorption and reflection of EM waves; a simulation in which the exact Maxwell's equations are solved to find the E-field distribution in cross-sectional planes of a human body containing the implanted receiver accounts for both accurately. A simplified model of a human body with an implanted receiver and an external transmitter is developed here. The main motivation is to find the E-field distribution throughout the model and the energy-density coupling between the transmitter and receiver regions. Edge-based finite element simulations are carried out on the model for a number of frequencies between 1 kHz and 9 GHz, using frequency-dependent values of EM properties such as the relative permittivity and conductivity of biological tissues. Energy-density coupling, E-field coupling, and S-parameters showing the reflection at the excitation port are obtained from the simulated results. The energy coupling is found to be almost constant, with values near 0.01, between 1 kHz and 500 MHz. Current densities remain below the thermally safe current-density level even for an excitation current density of 3×10⁶ A·m⁻² in the transmitter. Although the model used for simulation is simplistic, the results obtained are useful for studying EM power losses in a multilayered biological medium, and they can be applied to find safe limits of excitation current density for transmitting EM power through a biological medium such as a human body without causing damage due to heating.
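The absorption losses discussed above are governed by the attenuation constant of a plane wave in a lossy dielectric, a standard closed-form result (this is the textbook formula, not the thesis's finite element model; as the abstract notes, the permittivity and conductivity values must come from frequency-dependent tissue tables).

```python
import math

MU0 = 4e-7 * math.pi       # vacuum permeability (H/m)
EPS0 = 8.854e-12           # vacuum permittivity (F/m)

def attenuation_np(freq_hz, eps_r, sigma):
    """Attenuation constant alpha (Np/m) of a plane wave in a lossy,
    non-magnetic dielectric such as tissue:
        alpha = w*sqrt(mu*eps/2) * sqrt(sqrt(1 + (sigma/(w*eps))^2) - 1)
    The field decays as exp(-alpha * z); 1/alpha is the skin depth."""
    w = 2 * math.pi * freq_hz
    eps = eps_r * EPS0
    loss_tan = sigma / (w * eps)    # loss tangent sigma/(w*eps)
    return w * math.sqrt(MU0 * eps / 2.0) * math.sqrt(
        math.sqrt(1.0 + loss_tan ** 2) - 1.0)
```

In the good-conductor limit this reduces to the familiar sqrt(pi*f*mu*sigma), and since alpha grows with both frequency and conductivity, it captures why tissue absorption limits the usable transmission frequency for an implanted receiver.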
