231

BER performance of 2x2 and 4x4 transmit diversity MIMO in downlink LTE

Uyoata, U.E., Noras, James M. 12 1900 (has links)
Multi-antenna (MIMO) techniques are reported to improve the performance of radio communication systems in terms of capacity and spectral efficiency. In combination with appropriate receiver technologies they can also reduce the transmit power required to achieve a target bit error rate. Long Term Evolution (LTE), one of the candidates for fourth-generation (4G) mobile communication systems, has MIMO as one of its underlying technologies and uses ITU-defined channel models for its propagation environment. This paper undertakes a comprehensive verification of the performance of transmit diversity MIMO in the LTE downlink, using models built in MATLAB to carry out simulations. It is found that increasing the transmit diversity configuration from 2x2 to 4x4 generally offers SNR savings in flat fading channels, although with a user equipment moving at 30 km/h the 2x2 configuration offers a higher SNR saving below 7 dB. Furthermore, bandwidth variation has minimal effect on the BER performance of transmit diversity MIMO except at SNR values above 9 dB, while the gains of higher modulation schemes come with a transmit power penalty.
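A minimal Monte-Carlo sketch of the kind of BER simulation described, using the 2x1 Alamouti transmit-diversity scheme with BPSK over flat Rayleigh fading (NumPy assumed; the thesis's own MATLAB models, LTE channel profiles, and 4x4 configurations are not reproduced here):

```python
import numpy as np

def alamouti_ber(snr_db, n_bits=20000, rng=None):
    """Monte-Carlo BER of 2x1 Alamouti BPSK over flat Rayleigh fading."""
    rng = np.random.default_rng(rng)
    bits = rng.integers(0, 2, n_bits)
    s = 1 - 2.0 * bits                 # BPSK mapping: 0 -> +1, 1 -> -1
    s0, s1 = s[0::2], s[1::2]          # pair up symbols for the two time slots
    n_pairs = s0.size
    # One Rayleigh channel realization per pair, constant over both slots
    h0 = (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs)) / np.sqrt(2)
    h1 = (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs)) / np.sqrt(2)
    sigma = np.sqrt(0.5 / (10 ** (snr_db / 10)))
    noise = lambda: sigma * (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs))
    # Slot 1 sends (s0, s1); slot 2 sends (-s1*, s0*); power split over 2 antennas
    r0 = (h0 * s0 + h1 * s1) / np.sqrt(2) + noise()
    r1 = (-h0 * np.conj(s1) + h1 * np.conj(s0)) / np.sqrt(2) + noise()
    # Alamouti combining recovers each symbol with full transmit diversity
    y0 = np.conj(h0) * r0 + h1 * np.conj(r1)
    y1 = np.conj(h1) * r0 - h0 * np.conj(r1)
    est = np.where(np.concatenate([y0.real, y1.real]) > 0, 0, 1)
    sent = np.concatenate([bits[0::2], bits[1::2]])
    return np.mean(est != sent)
```

Sweeping `alamouti_ber` over a range of SNR values produces the familiar waterfall BER curves that such studies compare across antenna configurations.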
232

Characterizing Retention behavior of DDR4 SoDIMM

Palani, Purushothaman 05 June 2024 (has links)
Master of Science / We face an ever-increasing demand for computing power to sustain our technological advancements. A significant driving factor of this progress is the size and speed of the memory we possess. Modern computer architectures use DDR4-based DRAM (Dynamic Random Access Memory) to hold the immediate information needed for processing. Each bit in a DRAM memory module is implemented with a tiny capacitor and a transistor. Since the capacitors are prone to charge leakage, each bit must be frequently rewritten with its old value; a dedicated memory controller handles these periodic refreshes. If the cells are not refreshed, the bits lose their charge and thus the stored information, flipping to either 0 or 1 (depending on the design). Due to manufacturing variation, every capacitor fabricated has slightly different physical characteristics, and charge leakage depends on capacitance and other such physical properties. Hence no two DRAM modules have the same properties and decay pattern, and a given pattern cannot be reproduced accurately. This attribute of DRAM can serve as a source of 'Physically Unclonable Functions', which are sought after in the cryptography domain. This thesis aims to characterize the decay patterns of commercial DDR4 DRAM modules. I implemented a custom System on Chip on AMD/Xilinx's ZCU104 FPGA platform to interface different DDR4 modules with a primitive memory controller (without refreshes). Additionally, I introduced electric and magnetic fields close to the DRAM module to investigate their effects on the decay characteristics.
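A toy model of the unclonable-fingerprint idea, assuming a hypothetical log-normal retention-time distribution (the seed stands in for one physical device's manufacturing variation; real fingerprints come from hardware measurements, not a PRNG):

```python
import numpy as np

def decay_fingerprint(device_seed, pause_s, n_cells=4096):
    """Toy model: each cell has a manufacturing-determined retention time;
    cells whose retention time is shorter than the refresh pause flip."""
    rng = np.random.default_rng(device_seed)   # fixed per "physical device"
    # Log-normal retention times stand in for process variation (assumed)
    retention = rng.lognormal(mean=4.0, sigma=1.0, size=n_cells)  # seconds
    return retention < pause_s                  # True where the cell decayed

# Same device and pause -> identical fingerprint; different device -> differs
fp_a = decay_fingerprint(device_seed=1, pause_s=60)
fp_b = decay_fingerprint(device_seed=2, pause_s=60)
```

Longer refresh pauses flip a superset of the cells, which is why the decay pattern at a fixed pause length can serve as a stable, device-unique signature.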
233

Global Optimization of Transmitter Placement for Indoor Wireless Communication Systems

He, Jian 30 August 2002 (has links)
The DIRECT (DIviding RECTangles) algorithm of Jones et al., a variant of Lipschitzian methods for bound constrained global optimization, has been applied to the optimal transmitter placement for indoor wireless systems. Power coverage and BER (bit error rate) are considered as two criteria for optimizing locations of a specified number of transmitters across the feasible region of the design space. The performance of a DIRECT implementation in such applications depends on the characteristics of the objective function, the problem dimension, and the desired solution accuracy. Implementations with static data structures often fail in practice because of unpredictable memory requirements. This is especially critical in S⁴W (Site-Specific System Simulator for Wireless communication systems), where the DIRECT optimization is just one small component connected to a parallel 3D propagation ray tracing modeler running on a 200-node Beowulf cluster of Linux workstations, and surrogate functions for a WCDMA (wideband code division multiple access) simulator are also used to estimate the channel performance. Any component failure of this large computation would abort the entire design process. To make the DIRECT global optimization algorithm efficient and robust, a set of dynamic data structures is proposed here to balance the memory requirements with execution time, while simultaneously adapting to arbitrary problem size. The focus is on design issues of the dynamic data structures, related memory management strategies, and application issues of the DIRECT algorithm to the transmitter placement optimization for wireless communication systems. Results for two indoor systems are presented to demonstrate the effectiveness of the present work. / Master of Science
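A heavily simplified one-dimensional illustration of how a plain dynamically growing list can store DIRECT-style boxes without preallocating memory. Note the selection rule here (a single box per iteration, with an assumed width-bonus constant `k`) is a stand-in, not the convex-hull rule real DIRECT uses to pick all potentially optimal boxes:

```python
def direct_1d(f, lo, hi, iters=60, k=0.5):
    """Toy DIRECT-style trisection on [lo, hi]. The interval store is a
    plain Python list that grows as boxes are subdivided, so no static
    preallocation of the search tree is needed."""
    boxes = [((lo + hi) / 2, hi - lo)]          # (center, width) pairs
    fvals = [f(boxes[0][0])]
    for _ in range(iters):
        # pick one "promising" box: low value, with a bonus for large width
        i = min(range(len(boxes)), key=lambda j: fvals[j] - k * boxes[j][1] / 2)
        c, w = boxes[i]
        w3 = w / 3
        boxes[i] = (c, w3)                      # centre child reuses f(c)
        for cc in (c - w3, c + w3):             # two new children
            boxes.append((cc, w3))
            fvals.append(f(cc))
    i = min(range(len(boxes)), key=fvals.__getitem__)
    return boxes[i][0], fvals[i]
```

The list doubles in capacity on demand under the hood, which is exactly the memory-versus-time trade-off the thesis's dynamic data structures manage explicitly for the multi-dimensional case.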
234

Design and Implementation of a Practical FLEX Paging Decoder

McCulley, Scott L. 07 November 1997 (has links)
The Motorola Inc. paging protocol FLEX is discussed, along with the design and construction of a FLEX paging protocol decoder. The proposed decoding solution includes a radio frequency (RF) receiver and a decoder board. The RF receiver is discussed briefly; the decoder design is the main focus of this thesis, as it takes the frequency-modulated (FM) data from the receiver and converts it to FLEX data words. The decoder is designed to handle bit sampling, bit clock synchronization, FLEX packet detection, and FLEX data word collection. The FLEX data words are then sent by the decoder to an external computer through a serial link for bit processing and storage. A FLEX transmitter sends randomly generated data so that a bit error rate (BER) calculation can be made at a PC. Each receiver's noise power and noise bandwidth are measured so that the noise spectral density may be calculated, and a complete measurement set-up shows how these noise measurements are made. The BER at a known power level is recorded. This enables Eb/No curves to be generated so that results of the decoding algorithm may be compared. This is performed on two different receivers. / Master of Science
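The Eb/No calculation from the measured quantities can be sketched as below, where N0 is estimated as the measured noise power divided by the measured noise bandwidth (the specific input values and units are illustrative assumptions, not figures from the thesis):

```python
import math

def eb_n0_db(rx_power_dbm, bit_rate_bps, noise_power_dbm, noise_bw_hz):
    """Convert a measured receive power and noise measurement into Eb/N0 (dB).
    N0 is estimated as noise power divided by the measured noise bandwidth."""
    p_r = 10 ** (rx_power_dbm / 10) * 1e-3      # received power, watts
    p_n = 10 ** (noise_power_dbm / 10) * 1e-3   # noise power, watts
    n0 = p_n / noise_bw_hz                      # noise spectral density, W/Hz
    eb = p_r / bit_rate_bps                     # energy per bit, joules
    return 10 * math.log10(eb / n0)
```

Repeating this at several known power levels yields the points for an Eb/No-versus-BER curve for each receiver.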
235

Testing and Understanding Screwdriver Bit Wear

Adler, W. Alexander III 28 May 1998 (has links)
This thesis is focused on gaining a better knowledge of how to design and test Phillips screwdriver bits. Wear is the primary concern in applications where the bit is used in a power driver; such applications include drywalling, decking, and other construction and home projects. To pursue an optimal design, designers must understand how the bit geometry changes with wear. To make use of the geometrical data, the designer must also understand the fundamentals of the bit/screw surface contact and its effect on force distribution. This thesis focuses on three areas. First, understanding how the tool and bit are used, and what factors contribute to bit wear. With this understanding, a test rig has been designed to emulate typical users and, in doing so, reproduce the factors that cause wear. Second, there must be a means to analyze geometric changes in the bit as it wears. A method for doing this was developed and demonstrated for a Phillips bit, but the process can be applied to other bits. Finally, the fundamentals of surface contact must be understood in order to apply the geometrical information obtained to improved bit design. / Master of Science
236

On communication with Perfect Feedback against Bit-flips and Erasures

Shreya Nasa (18432009) 29 April 2024 (has links)
<p dir="ltr">We study the communication model with perfect feedback considered by Berlekamp (PhD Thesis, 1964), in which Alice wishes to communicate a binary message to Bob through a noisy adversarial channel, and has the ability to receive feedback from Bob via an additional noiseless channel. Berlekamp showed that in this model one can tolerate 1/3 fraction of errors (a.k.a., bit-flips or substitutions) with non-vanishing communication rate, which strictly improves upon the 1/4 error rate that is tolerable in the classical one-way communication setting without feedback. In the case when the channel is corrupted by erasures, it is easy to show that a fraction of erasures tending to 1 can be tolerated in the noiseless feedback setting, which also beats the 1/2 fraction that is maximally correctable in the no-feedback setting. In this thesis, we consider a more general perfect feedback channel that may introduce both errors and erasures. We show the following results:</p><p dir="ltr">1. If α, β ∈ [0, 1) are such that 3α + β < 1, then there exists a code that achieves a positive communication rate tolerating α fraction of errors and β fraction of erasures. Furthermore, no code can achieve a positive-rate in this channel when 3α + β ≥ 1.</p><p dir="ltr">2. For the case when 3α + β < 1, we compute the maximal asymptotic communication rate achievable in this setting.</p>
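A toy Python sketch of the easy direction noted above, that noiseless feedback makes erasures nearly harmless: the receiver reports each erasure over the feedback channel and the sender simply retransmits until the bit arrives. This uses random rather than adversarial erasures purely to illustrate the idea, and assumes `erase_prob` is strictly below 1:

```python
import random

def send_with_feedback(message_bits, erase_prob, seed=0):
    """Retransmit-on-erasure via perfect feedback. Every bit eventually gets
    through, so any erasure fraction below 1 still permits positive rate."""
    rng = random.Random(seed)
    received = []
    channel_uses = 0
    for b in message_bits:
        while True:                       # keep sending until not erased
            channel_uses += 1
            if rng.random() >= erase_prob:
                received.append(b)
                break
    return received, channel_uses
```

The expected number of channel uses per bit is 1/(1 - erase_prob), giving rate roughly 1 - erase_prob; handling adversarial bit-flips alongside erasures, as in the thesis, requires the far subtler coding schemes behind the 3α + β < 1 threshold.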
237

A Hybrid DSP and FPGA System for Software Defined Radio Applications

Podosinov, Volodymyr Sergiyovich 01 June 2011 (has links)
Modern devices provide a multitude of services that use radio frequencies in continually smaller packages. At these sizes the antenna used to transmit and receive information is usually very inefficient, and much power is wasted just to transmit a signal. To mitigate this problem, Dr. Manteghi introduced a new antenna capable of working efficiently across a large band, which it achieves by rapid frequency hopping across multiple channels. In order to test the performance of this antenna against more common antennas, a software radio was needed, so that tested antennas could be analyzed using multiple modulations. This thesis presents a software defined radio system designed for testing the bit error rate of digital modulation schemes using the described and other antennas. The designed system consists of a DSP, an FPGA, and commercially available modules; the combination makes the system flexible and high-performance while remaining affordable. Commercial modules are available for multiple frequency bands and are capable of the fast frequency switching required to test the antenna. The DSP board contains additional peripherals that allow for more complex projects in the future. The block structure of the system is also very instructive, as each stage of transmission and reception can be tested and observed. The full system has been constructed and tested using simulated and real signals. Code was developed for communication between the commercial modules and the DSP, bit error rate testing, data transmission, signal generation, and signal reception. A graphical user interface (GUI) was developed to help the user with information display and system control. This thesis describes the software defined radio design in detail and shows test results at the end. / Master of Science
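One common way to implement the BER testing described is to compare the received stream against a known pseudo-random bit sequence; a sketch using a PRBS-9 generator follows (the thesis does not specify which test pattern its code uses, so PRBS-9 is an assumption):

```python
def prbs9(n_bits, state=0x1FF):
    """PRBS-9 (x^9 + x^5 + 1) bit generator, a common BER test pattern.
    The 9-bit LFSR state must start nonzero."""
    out = []
    for _ in range(n_bits):
        new = ((state >> 8) ^ (state >> 4)) & 1   # taps at degrees 9 and 5
        out.append(state & 1)
        state = ((state << 1) | new) & 0x1FF
    return out

def bit_error_rate(sent, received):
    """Fraction of positions where the two bit streams disagree."""
    errors = sum(a != b for a, b in zip(sent, received))
    return errors / len(sent)

tx = prbs9(1000)
rx = tx.copy()
rx[10] ^= 1          # inject a single bit error for illustration
```

Because both ends can regenerate the same PRBS locally, only synchronization, not the payload, needs to be shared before counting errors.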
238

COUPLED LAGRANGE-EULER MODEL FOR SIMULATION OF BUBBLY FLOW IN VERTICAL PIPES CONSIDERING TURBULENT 3D RANDOM WALKS MODELS AND BUBBLES INTERACTION EFFECTS

Ali Abd El Aziz Essa ., Mohamed 07 December 2012 (has links)
A new two-way coupled Eulerian-Lagrangian approach for the simulation of air-water bubbly flow is presented in this thesis, including the effects of collisions between bubbles as well as possible bubble breakup and coalescence. The approach uses the Continuous Random Walk (CRW) model to account for velocity fluctuations, and is embedded within a k-epsilon turbulence model for the continuous liquid phase. This thesis studies the methods for coupling the two approaches, the effect of the lift force and turbulent dispersion on the void fraction distribution, and the bubble coalescence and breakup models that can be employed in this type of approach. Starting from an Eulerian code that simulates the continuous phase, the Lagrangian approach was coupled on top of it. So that this coupling affects the continuous phase, momentum and turbulence source terms were added to its solver, and the computational volume of each cell was modified to account for the volume occupied by the dispersed phase. The two-way coupling modifies the velocity and turbulence profiles of the continuous phase considerably, bringing them closer to the real ones, which is essential for the correct simulation of the interfacial forces. Bubble-bubble and bubble-wall collisions were included. This is a necessary precursor to including bubble breakup and coalescence, even though the collisions themselves have limited effect on the void fraction distribution. The coalescence process is based on the model of Chesters (1991), which compares the collision time with the drainage time of the film between bubbles to determine whether coalescence occurs. The breakup model is based on the Martínez-Bazán model. 
One of the main milestones of… / Ali Abd El Aziz Essa ., M. (2012). COUPLED LAGRANGE-EULER MODEL FOR SIMULATION OF BUBBLY FLOW IN VERTICAL PIPES CONSIDERING TURBULENT 3D RANDOM WALKS MODELS AND BUBBLES INTERACTION EFFECTS [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/18068
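The velocity fluctuations that CRW-type models supply are typically built on an Ornstein-Uhlenbeck (Langevin) process; a minimal one-dimensional sketch of that building block follows (the parameter values are illustrative, and this is not the full 3D model of the thesis):

```python
import numpy as np

def langevin_velocity(n_steps, dt, tau, sigma_u, rng=None):
    """Ornstein-Uhlenbeck update for the fluctuating fluid velocity seen by
    a bubble -- the building block of continuous random walk (CRW) models.
    tau: Lagrangian integral time scale; sigma_u: rms velocity fluctuation."""
    rng = np.random.default_rng(rng)
    u = np.zeros(n_steps)
    a = np.exp(-dt / tau)                      # exact exponential decay factor
    b = sigma_u * np.sqrt(1.0 - a * a)         # keeps the variance stationary
    for k in range(1, n_steps):
        u[k] = a * u[k - 1] + b * rng.standard_normal()
    return u
```

Integrating this fluctuation on top of the mean Eulerian velocity field gives each tracked bubble a turbulent dispersion consistent with the prescribed time scale and intensity.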
239

Design and implementation of decoders for error correction in high-speed communication systems

Català Pérez, Joan Marc 01 September 2017 (has links)
This thesis is focused on the design and implementation of binary low-density parity-check (LDPC) code decoders for high-speed modern communication systems. The basics of LDPC codes and the performance and bottlenecks, in terms of complexity and hardware efficiency, of the main soft-decision and hard-decision decoding algorithms (such as Min-Sum, Optimized 2-bit Min-Sum and Reliability-based iterative Majority-Logic) are analyzed. The complexity and performance of those algorithms are improved to allow efficient hardware architectures. A new decoding algorithm called One-Minimum Min-Sum is proposed. It reduces considerably the complexity of the check node update equations of the Min-Sum algorithm. The second minimum is estimated from the first minimum value by means of a linear approximation that allows a dynamic adjustment. The Optimized 2-bit Min-Sum algorithm is modified to initialize it with the complete LLR values and to introduce the extrinsic information in the messages sent from the variable nodes. Its variable node equation is reformulated to reduce its complexity. Both algorithms were tested for the (2048,1723) RS-based LDPC code and (16129,15372) LDPC code using an FPGA-based hardware emulator. They exhibit BER performance very close to the Min-Sum algorithm and do not introduce an early error floor. In order to show the hardware advantages of the proposed algorithms, hardware decoders were implemented in a 90 nm CMOS process and FPGA devices based on two types of architectures: full-parallel and a partial-parallel one with horizontal layered schedule. The results show that the decoders are more area-time efficient than other published decoders and that the low complexity of the Modified Optimized 2-bit Min-Sum allows the implementation of 10 Gbps decoders in current FPGA devices. Finally, a new hard-decision decoding algorithm, the Historical-Extrinsic Reliability-Based Iterative Decoder, is presented. 
This algorithm introduces the new idea of considering hard-decision votes as soft decisions to compute the extrinsic information of previous iterations. It is suitable for high-rate codes and improves the BER performance of the previous RBI-MLGD algorithms, with similar complexity. / Català Pérez, JM. (2017). Design and implementation of decoders for error correction in high-speed communication systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86152
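A sketch of the standard Min-Sum check-node update, which only ever needs the two smallest input magnitudes — the second minimum being exactly the quantity the proposed One-Minimum algorithm replaces with a linear estimate from the first. This version computes the exact second minimum and assumes nonzero LLR inputs; the `offset` correction term is optional:

```python
import numpy as np

def min_sum_check_node(llrs, offset=0.0):
    """Min-Sum check node update: each outgoing message is the minimum
    magnitude over the *other* incoming LLRs, signed by the product of
    their signs. Inputs are assumed nonzero."""
    mags = np.abs(llrs)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    i1 = np.argmin(mags)
    m1 = mags[i1]                               # first minimum
    m2 = np.min(np.delete(mags, i1))            # second minimum
    # Edge i1 sees m2 (its own value excluded); every other edge sees m1
    out = np.where(np.arange(len(llrs)) == i1, m2, m1)
    out = np.maximum(out - offset, 0.0)         # optional offset correction
    return total_sign * signs * out
```

Avoiding the search for `m2` across every check node is what shrinks the update logic in the One-Minimum hardware architecture.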
240

Estimation of Dulling Rate and Bit Tooth Wear Using Drilling Parameters and Rock Abrasiveness

Mazen, Ahmed Z., Rahmanian, Nejat, Mujtaba, Iqbal, Hassanpour, A. January 2019 (has links)
Optimisation of drilling operations is becoming increasingly important, as it can significantly reduce oil well development cost. One of the major objectives in oil well drilling is to increase the penetration rate by selecting the optimum drill bit based on offset well data, and to adjust the drilling factors to keep the bit in good condition during the operation. At the same time, it is important to predict the bit wear and the time to pull the bit out of hole, to prevent fishing jobs. Numerous models have been suggested in the literature for predicting the time to pull the bit out to surface, rather than predicting or estimating the bit wear rate. The majority of the available models are largely empirical, apply only under limited conditions, and do not include all the drilling parameters, such as formation abrasiveness and bit hydraulics. In this paper, a new approach is presented to improve drill bit wear estimation, consisting of a combination of the Bourgoyne and Young (BY) drilling rate model and an empirical relation for the effects of rotary speed (RPM) and weight on bit (WOB) on drilling rate (ROP) and rate of tooth wear. In addition to the drilling parameters, the formation abrasiveness and the effect of the jet impact force of the mud have also been accounted for in estimating the bit wear. The proposed model enables estimation of the rock abrasiveness, which leads to calculation of the dynamic dulling rate of the bit while drilling; this is used to assess the bit tooth wear more accurately than the mechanical specific energy (MSE). The estimated dulling rate at the depth of pulling out is then used to determine the dull grade of the bit. The technique is validated in five wells located in two different oil fields in Libya. All the wells studied showed good agreement between the actual and estimated bit tooth wear.
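A toy incremental dulling-rate integration in the spirit of the approach described (the functional form and constants here are hypothetical placeholders for illustration, not the Bourgoyne-Young coefficients or the paper's actual model):

```python
def integrate_tooth_wear(hours, wob_klb, rpm, abrasiveness, dt=0.1):
    """Toy tooth-wear integration: a dulling rate that grows with WOB and
    RPM, scaled by a formation-abrasiveness constant (all assumed forms).
    Wear fraction h is capped at 1.0 (fully dull)."""
    h = 0.0
    for _ in range(int(round(hours / dt))):
        if h >= 1.0:
            break
        dull_rate = abrasiveness * (wob_klb / 40.0) * (rpm / 100.0)  # per hour
        h = min(1.0, h + dull_rate * dt)
    return h

def dull_grade(h):
    """IADC-style dull grade, reported in eighths of tooth height worn."""
    return round(8 * h)
```

Integrating a depth- or formation-dependent dulling rate along the bit run, rather than assuming a constant rate, is the essence of estimating the dull grade at pull-out depth.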
