361

Design of a 16 GSps RF Sampling Resistive DAC with on-chip Voltage Regulator / Konstruktion av en 16 GSps resistiv digital-analogomvandlare med integrerad spänningsregulator

Thomsson, Pontus, Seyed Aghamiri, Cyrus January 2021 (has links)
Wireless communication technologies continue to evolve to meet the demand for increased data throughput. One approach to higher throughput is to increase the bandwidth. A problem with very large bandwidths is the implementation of digital-to-analog converters with sampling rates roughly in the 5 to 20 GHz range. Traditionally, current-steering data converters have been the go-to choice, but their linearity suffers at higher frequencies. An alternative is the voltage-mode digital-to-analog converter, an attractive option for integration into digital-intensive application-specific integrated circuits owing to its largely digital architecture. In this thesis, a resistive voltage-mode digital-to-analog converter with an integrated low-dropout voltage regulator is proposed for a sampling rate of 16 GSps. With an output impedance matched to a 100 Ω load, the proposed converter achieves a spurious-free dynamic range of 64 dBc and an intermodulation distortion of 66 dBc for output frequencies up to 5.5 GHz in the worst process corner.
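The headline figure of merit here, spurious-free dynamic range, is easy to estimate numerically. Below is a minimal sketch (not the thesis's design) of an ideal voltage-mode resistive DAC at 16 GSps: the input code sets a resistive division of the reference, and the SFDR of a coherently sampled tone near 5.5 GHz is read off the FFT. The 12-bit resolution is an assumption; the abstract does not state one.

```python
import numpy as np

fs = 16e9             # sampling rate from the abstract (16 GSps)
nbits = 12            # assumed resolution -- the abstract does not state one
n = 4096              # FFT length
k = 1409              # prime bin index -> coherent tone near 5.5 GHz
fout = k * fs / n     # ~5.504 GHz, the top of the quoted band

t = np.arange(n) / fs
ideal = 0.5 * (1 + np.sin(2 * np.pi * fout * t))         # 0..1 of full scale
code = np.round(ideal * (2**nbits - 1)).astype(int)      # digital input word

# Voltage-mode resistive DAC: the code selects a resistive division of the
# reference, so the (ideal) output is simply proportional to the code.
vout = code / (2**nbits - 1)

spec = np.abs(np.fft.rfft(vout))**2
spec[0] = 0.0                              # ignore the DC offset
sig = int(np.argmax(spec))                 # signal bin (should be k)
spur = float(np.max(np.delete(spec, sig))) # largest remaining spur
print(f"SFDR of the ideal quantized tone: "
      f"{10 * np.log10(spec[sig] / spur):.1f} dBc")
```

This only captures quantization; the thesis's 64 dBc figure also reflects circuit non-idealities in the worst process corner.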
362

Probabilistic Computing: From Devices to Systems

Jan Kaiser (8346969) 22 April 2022 (has links)
Conventional computing is based on the concept of bits, which are classical entities that are either 0 or 1 and can be represented by stable magnets. The field of quantum computing relies on qubits, which are a complex linear combination of 0 and 1. Recently, the concept of probabilistic computing with probabilistic (p-)bits was introduced, where p-bits are robust classical entities that fluctuate between 0 and 1. P-bits can be naturally represented by low-barrier nanomagnets. Probabilistic computers (p-computers) based on p-bits are domain-based hardware accelerators for Monte Carlo algorithms that can efficiently address probabilistic tasks like sampling, optimization and machine learning.

In this dissertation, starting from the intrinsic physics of nanomagnets, we show that a compact hardware implementation of a p-bit based on stochastic magnetic tunnel junctions (s-MTJs) can operate at high speeds on the order of nanoseconds, a prediction that has recently received experimental support.

We then move to the system level and illustrate by simulation and by experiment how multiple interconnected p-bits can be utilized to train a Boltzmann machine built with hardware p-bits. We observe that even non-ideal s-MTJs can be utilized for probabilistic computing when combined with hardware-aware learning.

Finally, we show how to build a p-computer to accelerate a wide variety of problems ranging from optimization and sampling to quantum computing and machine learning. The common theme for all these applications is the underlying Monte Carlo and Markov chain Monte Carlo algorithms and their parallelism enabled by a unique p-computer architecture.
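The p-bit behaviour the abstract describes has a compact software analogue: each p-bit outputs ±1 with a probability set by the tanh of its synaptic input, and sequential (Gibbs-like) updates of an interconnected network sample a Boltzmann distribution. The sketch below is illustrative: the couplings, biases and network size are invented for the example, not taken from the dissertation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Toy 3 p-bit network: symmetric couplings J and biases h are invented
# for illustration, not taken from the dissertation.
J = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  0.8],
              [-0.5, 0.8,  0.0]])
h = np.array([0.2, -0.1, 0.0])
beta = 1.0                              # inverse temperature
steps = 200_000

m = rng.choice([-1, 1], size=3)         # bipolar p-bit states
counts = {}
for step in range(steps):
    i = step % 3                        # sequential (Gibbs-like) update
    I = beta * (J[i] @ m + h[i])        # synapse: input to p-bit i
    # Stochastic neuron: m_i = +1 with probability (1 + tanh(I)) / 2.
    m[i] = 1 if rng.uniform(-1, 1) < np.tanh(I) else -1
    key = tuple(m)
    counts[key] = counts.get(key, 0) + 1

# The visit frequencies should track the Boltzmann weights exp(-beta * E).
def energy(s):
    s = np.asarray(s)
    return -0.5 * s @ J @ s - h @ s

Z = sum(np.exp(-beta * energy(s)) for s in product([-1, 1], repeat=3))
for s in sorted(counts, key=counts.get, reverse=True):
    print(s, f"observed {counts[s] / steps:.3f}",
          f"Boltzmann {np.exp(-beta * energy(s)) / Z:.3f}")
```

In hardware, the tanh and the comparison against a random number are played by the s-MTJ's telegraphic fluctuations, which is what allows nanosecond-scale updates.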
363

Impacts des non-linéarités dans les systèmes multi-porteuses de type FBMC-OQAM / OFDM-FBMC performance in presence of non-linear high power amplifier

Bouhadda, Hanen 22 March 2016 (has links)
This thesis studies the BER performance of OFDM and FBMC/OQAM systems in the presence of a memoryless power amplifier. We then propose a PA linearization technique based on adaptive neural predistortion, as well as two techniques for correcting the nonlinearities at the receiver. / In our work, we have studied the impact of in-band nonlinear distortion caused by the PA on both OFDM and FBMC/OQAM systems. A theoretical approach is proposed to evaluate the BER performance of the two systems, modelling the in-band nonlinear distortion as a complex gain plus an uncorrelated additive white Gaussian noise, as given by the Bussgang theorem. We then propose different techniques to compensate for this NLD on either the transmitter or the receiver side.
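The Bussgang decomposition invoked here is straightforward to reproduce numerically: for a Gaussian input (a good model for OFDM time-domain samples), any memoryless nonlinearity splits into a complex gain plus a distortion term uncorrelated with the input. A small sketch, assuming a soft-limiter PA model (the thesis's amplifier model is not specified in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# OFDM time-domain samples are approximately complex Gaussian (central
# limit over many subcarriers) -- exactly the setting of Bussgang's theorem.
n = 1 << 16
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

def soft_limiter(s, a_sat=1.0):
    """Assumed memoryless PA model: clip the envelope, keep the phase."""
    r = np.abs(s)
    return np.where(r <= a_sat, s, a_sat * s / r)

y = soft_limiter(x)

# Bussgang decomposition: y = K*x + d with d uncorrelated with x.
K = np.vdot(x, y) / np.vdot(x, x)        # K = E[y x*] / E[|x|^2]
d = y - K * x
print("complex gain K:", K)
print("residual correlation E[d x*]:", abs(np.vdot(x, d)) / n)
sdr = np.abs(K)**2 * np.mean(np.abs(x)**2) / np.mean(np.abs(d)**2)
print(f"signal-to-distortion ratio: {10 * np.log10(sdr):.1f} dB")
```

Treating d as an extra additive white Gaussian noise term is what lets a closed-form BER expression be written for both OFDM and FBMC/OQAM.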
364

Probability Density Function Estimation Applied to Minimum Bit Error Rate Adaptive Filtering

Phillips, Kimberly Ann 28 May 1999 (has links)
It is known that a matched filter is optimal for a signal corrupted by Gaussian noise. In a wireless environment, the received signal may be corrupted by Gaussian noise and a variety of other channel disturbances: cochannel interference, multiple access interference, large- and small-scale fading, etc. Adaptive filtering is the usual approach to mitigating this channel distortion. Existing adaptive filtering techniques usually attempt to minimize the mean square error (MSE) of some aspect of the received signal, with respect to the desired aspect of that signal. Adaptive minimization of MSE does not always guarantee minimization of bit error rate (BER). The main focus of this research involves estimation of the probability density function (PDF) of the received signal; this PDF estimate is used to adaptively determine a solution that minimizes BER. To this end, a new adaptive procedure called the Minimum BER Estimation (MBE) algorithm has been developed. MBE shows improvement over the Least Mean Squares (LMS) algorithm for most simulations involving interference and in some multipath situations. Furthermore, the new algorithm is more robust than LMS to changes in algorithm parameters such as step size and window width. / Master of Science
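The core idea, estimating the PDF of the decision variable and adapting toward minimum BER rather than minimum MSE, can be sketched compactly. The version below uses a Gaussian-kernel (Parzen) PDF estimate, which makes the estimated BER differentiable in the filter taps; the abstract does not specify the MBE algorithm's internals, so the channel, kernel width, and step size here are illustrative assumptions rather than the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (illustrative, not the thesis's scenario): BPSK through a
# short ISI channel with additive Gaussian noise, equalized by 3 taps.
n = 5000
bits = rng.choice([-1.0, 1.0], size=n)
chan = np.array([1.0, 0.6, -0.3])
x = np.convolve(bits, chan)[:n] + 0.4 * rng.standard_normal(n)

gauss = lambda u: np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

w = np.array([1.0, 0.0, 0.0])    # equalizer taps
rho = 0.3                        # kernel width of the PDF estimate
mu = 0.02                        # adaptation step size

# Kernel-density-based minimum-BER adaptation: smoothing each decision
# variable with a Gaussian kernel makes the BER estimate
#     P_e ~= mean Q(s_k * y_k / rho)
# differentiable in w, so we can follow its gradient downhill.
for k in range(2, n):
    r = x[k-2:k+1][::-1]         # regressor, most recent sample first
    y = w @ r                    # filter output
    s = bits[k]                  # training symbol
    w += mu * gauss(s * y / rho) * s * r / rho
    w /= np.linalg.norm(w)       # fix the scale, as in kernel MBER schemes

y_all = np.array([w @ x[k-2:k+1][::-1] for k in range(2, n)])
print("adapted taps:", w)
print("empirical BER:", np.mean(np.sign(y_all) != bits[2:n]))
```

The contrast with LMS is visible in the update: LMS pushes y toward ±1 everywhere, whereas the kernel update concentrates adaptation on samples near the decision boundary, where errors actually occur.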
365

Optimal Signaling Strategies and Fundamental Limits of Next-Generation Energy-Efficient Wireless Networks

Ranjbar, Mohammad 29 August 2019 (has links)
No description available.
366

Automated Generation of Efficient Bitslice Implementations for Arbitrary Sboxes / Automatiserad generering av effektiva bitvisa implementeringar för godtyckliga lådor

Bariant, Augustin January 2023 (has links)
Whitebox cryptography aims at protecting standard cryptographic algorithms that execute in attacker-controlled environments, in which the attacker is able to read a secret key directly from memory. Common implementations mask all data at runtime and operate on masked data by using many small precomputed tables. Practical whiteboxes involve trade-offs between security and execution speed, to limit their footprints and enable applications such as real-time video streaming.

To improve this compromise, we study the use of bitslicing (or bit-parallelism) to implement whiteboxes. Bitslicing is commonly used to write fast constant-time implementations of cryptographic algorithms and relies on the synthesis of boolean circuits implementing the corresponding algorithms. The synthesis of optimal circuits for lookup tables is resource intensive and generally only performed once. In a whitebox context, however, many random lookup tables are generated at compile time, so we require the boolean circuit generation to be time efficient.

In this master thesis, we review the existing circuit-synthesis algorithms and analyse their usability in the whitebox context. In particular, we study the technique of Binary Decision Diagrams to generate efficient circuits in a cheap and adaptable manner. We implemented a flexible version of this algorithm as a C++ library. Finally, we go through different techniques to evaluate the generated circuits, analyse the performance of our algorithm, and recommend the best parameters for the whitebox context. / White-box cryptography is known as a protection for cryptographic algorithms executing in attacker-controlled environments. The classical approach replaces operations with accesses to precomputed tables, at a cost in performance. It is difficult to obtain a good compromise between security and execution speed for heavy applications such as real-time video streaming. Bit-parallelism, or bitslicing, is used in traditional cryptography to speed up implementations, and also in white-box cryptography. This implementation technique requires the synthesis of a boolean circuit for each table, a search that can be very time-consuming. In practice, it is common to regularly regenerate all the tables used in a white box to renew its defence, which complicates the application of bit-parallelism. In this master's thesis we present our work towards efficient synthesis of boolean circuits for the compilation of bitsliced white boxes. Together with this thesis we release a C++ library and an LLVM compilation module for writing bitsliced implementations, with performance and readability as goals.
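To make the bitslicing idea concrete: pack one evaluation per bit position of a machine word, so 64 S-box lookups become a single pass of bitwise operations over a boolean circuit. The sketch below builds that circuit generically by Shannon expansion, a multiplexer tree, which is essentially the unoptimized form a BDD-based synthesis yields; the PRESENT S-box is used only as a public example table, and nothing here is taken from the thesis's C++ library.

```python
import random

# PRESENT's 4-bit S-box, used here only as a public example table.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

MASK = (1 << 64) - 1          # 64 evaluations packed into one word

def bitslice_sbox(x_bits):
    """Evaluate the S-box on 64 inputs at once.

    x_bits[i] holds bit i of all 64 inputs, one input per bit position.
    Each output bit is computed as a multiplexer tree obtained by Shannon
    expansion -- essentially the unoptimized circuit a BDD-based synthesis
    would produce.
    """
    out = []
    for j in range(4):                        # one output bit at a time
        # Leaves: table constants as all-zero / all-one words.
        layer = [MASK if (SBOX[v] >> j) & 1 else 0 for v in range(16)]
        for i in range(4):                    # consume one input bit per level
            x, nx = x_bits[i], ~x_bits[i] & MASK
            layer = [(x & layer[2*k + 1]) | (nx & layer[2*k])
                     for k in range(len(layer) // 2)]
        out.append(layer[0])
    return out

# Pack 64 random 4-bit inputs into bit-planes and check every lane.
vals = [random.randrange(16) for _ in range(64)]
x_bits = [sum(((v >> i) & 1) << p for p, v in enumerate(vals))
          for i in range(4)]
y_bits = bitslice_sbox(x_bits)
for p, v in enumerate(vals):
    assert sum(((y_bits[j] >> p) & 1) << j for j in range(4)) == SBOX[v]
print("all 64 bitsliced lanes match the lookup table")
```

The synthesis problem the thesis addresses is exactly the optimization this sketch skips: reducing such a mux tree to far fewer gates, quickly enough to run at every whitebox compilation.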
367

Analog and Digital Array Processor Realization of a 2D IIR Beam Filter for Wireless Applications

Joshi, Rimesh M. 01 February 2012 (has links)
No description available.
368

A 10-Bit Dual Plate Sampling Capacitive DAC with Auto-Zero On-Chip Reference Voltage Generation

Gaddam, Ravi Shankar 01 November 2012 (has links)
No description available.
369

Improved Stereo Vision Methods for FPGA-Based Computing Platforms

Fife, Wade S. 28 November 2011 (has links) (PDF)
Stereo vision is a very useful, yet challenging technology for a wide variety of applications. One of the greatest challenges is meeting the computational demands of stereo vision applications that require real-time performance. The FPGA (Field Programmable Gate Array) is a readily-available technology that allows many stereo vision methods to be implemented while meeting the strict real-time performance requirements of some applications. Some of the best results have been obtained using non-parametric stereo correlation methods, such as the rank and census transforms. Yet relatively little work has been done to study these methods or to propose new algorithms based on the same principles for improved stereo correlation accuracy or reduced resource requirements. This dissertation describes the sparse census and sparse rank transforms, which significantly reduce the cost of implementation while maintaining and in some cases improving correlation accuracy. This dissertation also proposes the generalized census and generalized rank transforms, which open up a new class of stereo vision transforms and allow the stereo system to be even more optimized, often reducing the hardware resource requirements. The proposed stereo methods are analyzed, providing both quantitative and qualitative results for comparison to existing algorithms. These results show that the computational complexity of local stereo methods can be significantly reduced while maintaining very good correlation accuracy. A hardware architecture for the implementation of the proposed algorithms is also described and the actual resource requirements for the algorithms are presented. These results confirm that dramatic reductions in hardware resource requirements can be achieved while maintaining high stereo correlation accuracy. This work proposes the multi-bit census, which provides improved pixel discrimination as compared to the census, and leads to improved correlation accuracy with some stereo configurations. A rotation-invariant census transform is also proposed and can be used in applications where image rotation is possible.
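The census transform at the heart of this work is compact enough to sketch directly: each pixel is replaced by a bit string recording which window neighbours are darker than it, and the stereo matching cost is the Hamming distance between left and right bit strings. The sketch below is a plain dense software implementation with illustrative image sizes and a synthetic shift, not the dissertation's sparse variants or FPGA architecture.

```python
import numpy as np

def census_transform(img, win=3):
    """Census transform: each pixel becomes a bit string recording which
    neighbours in a win x win window are darker than the centre pixel."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            nb = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (nb < img).astype(np.uint64)
    return out

def hamming(a, b):
    """Per-pixel Hamming distance between two census images."""
    x, d = a ^ b, np.zeros_like(a)
    while x.any():
        d += x & np.uint64(1)
        x >>= np.uint64(1)
    return d

# Synthetic pair: the right view is the left view shifted by 5 pixels, so
# the winning disparity should be 5 (sizes and shift are illustrative).
left = np.random.default_rng(4).integers(0, 256, (60, 80)).astype(np.uint8)
right = np.roll(left, -5, axis=1)
cl, cr = census_transform(left), census_transform(right)
costs = [int(hamming(cl, np.roll(cr, d, axis=1)).sum()) for d in range(16)]
print("best disparity:", int(np.argmin(costs)))     # prints 5
```

Because the cost is a pure bit-count over XORed words, this maps naturally onto FPGA logic, which is what makes census-style methods attractive for real-time hardware.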
370

Friction Bit Joining of 5754 Aluminum to DP980 Ultra-High Strength Steel: A Feasibility Study

Weickum, Britney 07 July 2011 (has links) (PDF)
In this study, the dissimilar metals 5754 aluminum and DP980 ultra-high strength steel were joined using the friction bit joining (FBJ) process. The friction bits were made from one of three steels: 4140, 4340, or H13. Experiments were performed in lap shear, T-peel, and cross tension configurations, with the 0.070" thick 5754 aluminum alloy as the top layer through which the friction bit cut, and the 0.065" thick DP980 as the bottom layer to which the friction bit welded. All experiments were performed using a computer-controlled welding machine that was purpose-built and provided by MegaStir Technologies. Through a series of designed experiments (DOE), weld processing parameters were varied and controlled to determine which parameters had a significant effect on weld strength at a 95% confidence level. The parameters that were varied included spindle rotational speeds, Z-command depths, Z-velocity plunge rates, dwell times, and friction bit geometry. The maximum lap shear weld strength was calculated to be 1425.4 lbf, obtained using a bit tip length of 0.175", a tip diameter of 0.245", a neck diameter of 0.198", cutting and welding z-velocities of 2.6"/min, cutting and welding speeds of 550 and 2160 RPM respectively, cutting and welding z-commands of -0.07" and -0.12" respectively, a cooling dwell of 500 ms, and a welding dwell of 1133.8 ms. These parameters were further refined to reduce the weld creation time to 1.66 seconds. These parameters also worked well in conjunction with an adhesive to form weld-bonded samples. The uncured adhesive had no effect on the lap shear strengths of the samples. Using the parameters described above, it was discovered that cross tension and T-peel samples suffered from shearing within the bit that caused the samples to break underneath the flange of the bit during testing. Visual inspection of sectioned welds indicated the presence of cracking and void zones within the bit.
