231

Design methodologies for built-in testing of integrated RF transceivers with the on-chip loopback technique

Onabajo, Marvin Olufemi 15 May 2009 (has links)
Advances toward increased integration and complexity of radio frequency (RF) and mixed-signal integrated circuits reduce the effectiveness of contemporary test methodologies and result in a rising cost of testing. The focus of this research is on the circuit-level implementation of alternative test strategies for integrated wireless transceivers, with the aim of lowering test cost by eliminating the need for expensive RF equipment during production testing.

The first circuit proposed in this thesis closes the signal path between the transmitter and receiver sections of integrated transceivers in test mode for bit error rate analysis at low frequencies. Furthermore, the output power of this on-chip loopback block was made variable to allow gain and 1-dB compression point determination for the RF front-end circuits with on-chip power detectors. The loopback block is intended for transceivers operating in the 1.9-2.4 GHz range, and it can compensate for transmitter-receiver offset frequency differences from 40 MHz to 200 MHz. The measured attenuation range of the 0.052 mm² loopback circuit in 0.13 µm CMOS technology was 26-41 dB with continuous control, but post-layout simulation results indicate that the attenuation range can be reduced to 11-27 dB via optimizations.

Another circuit presented in this thesis is a current generator for built-in testing of impedance-matched RF front-end circuits with current injection. Since this circuit has high output impedance (>1 kΩ up to 2.4 GHz), it does not influence the input matching network of the low-noise amplifier (LNA) under test. A major advantage of the current injection method over the typical voltage-mode approach is that the built-in test can expose fabrication defects in components of the matching network in addition to on-chip devices. The current generator was employed together with two power detectors in a realization of a built-in test for an LNA with 14% layout area overhead in 0.13 µm CMOS technology (<1.5% for the 0.002 mm² current generator). The post-layout simulation results showed that the LNA gain (S21) estimation with the external matching network was within 3.5% of the actual gain in the presence of process-voltage-temperature variations and power detector imprecision.
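As a rough illustration of the measurement this loopback-plus-power-detector scheme enables, the sketch below extracts small-signal gain and the input-referred 1-dB compression point from a swept power measurement. The amplifier model, power levels, and helper function are invented for the example; they are not from the thesis.

```python
import numpy as np

# Hypothetical illustration: recover small-signal gain and input P1dB from
# (input power, detected output power) pairs, as a loopback test might sweep.

def gain_and_p1db(pin_dbm, pout_dbm):
    """Small-signal gain and the input power where gain has dropped by 1 dB."""
    gain = pout_dbm - pin_dbm                    # per-point gain in dB
    g_small = np.mean(gain[:3])                  # average over lowest drive levels
    compressed = np.where(gain <= g_small - 1.0)[0]
    p1db_in = pin_dbm[compressed[0]] if compressed.size else None
    return g_small, p1db_in

# Toy amplifier: 15 dB gain with a soft-saturation knee near -10 dBm input.
pin = np.linspace(-30.0, 0.0, 31)
pout = pin + 15.0 - 10.0 * np.log10(1.0 + 10.0 ** ((pin + 10.0) / 10.0))

g, p1 = gain_and_p1db(pin, pout)
print(f"small-signal gain ~ {g:.1f} dB, input P1dB ~ {p1:.1f} dBm")
```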
232

Robust Clock Synchronization in Wireless Sensor Networks

Saibua, Sawin 2010 August 1900 (has links)
Clock synchronization between any two nodes in a Wireless Sensor Network (WSN) is generally accomplished by exchanging messages and adjusting the offset and skew parameters of each node's clock. To cope with unknown network message delays, the clock offset and skew estimation schemes have to be reliable and robust in order to attain long-term synchronization and save energy. A joint clock offset and skew estimation scheme is studied and developed based on the Gaussian Mixture Kalman Particle Filter (GMKPF). The proposed estimation scheme is shown to be a more flexible alternative than the Gaussian Maximum Likelihood Estimator (GMLE) and the Exponential Maximum Likelihood Estimator (EMLE), and to be a robust estimation scheme in the presence of non-Gaussian/non-exponential random delays. This study also includes a suboptimal method called the Maximum Likelihood-like Estimator (MLLE) for Gaussian and exponential delays (GMLLE and EMLLE, respectively). The computer simulations illustrate that the scheme based on GMKPF yields better results in terms of Mean Square Error (MSE) relative to GMLE, EMLE, GMLLE, and EMLLE when the network delays are modeled as non-Gaussian/non-exponential distributions or as a mixture of several distributions.
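For intuition about the quantities being estimated, here is a minimal sketch under the common linear clock model, using ordinary least squares as a deliberately simplified stand-in for the GMKPF and ML estimators discussed above; all timestamps, delay statistics, and parameter values are hypothetical.

```python
import numpy as np

# Linear clock model: t_recv = (1 + skew) * t_send + offset + delay + noise.
# A least-squares line fit recovers skew and (offset + mean delay); handling
# non-Gaussian delays robustly is exactly what the GMKPF approach targets.

rng = np.random.default_rng(0)
true_skew, true_offset = 5e-5, 2.3e-3            # hypothetical values (seconds)

t_send = np.cumsum(rng.uniform(0.5, 1.5, 50))    # sender timestamps
delay = 1e-4 + rng.normal(0.0, 2e-5, 50)         # random link delay
t_recv = (1 + true_skew) * t_send + true_offset + delay

A = np.vstack([t_send, np.ones_like(t_send)]).T  # fit t_recv = a*t_send + b
(a, b), *_ = np.linalg.lstsq(A, t_recv, rcond=None)
print(f"estimated skew {a - 1:.2e}, offset + mean delay {b * 1e3:.3f} ms")
```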
233

Three-Dimensional Finite Element Analysis of Three-Roll Planetary Mill Processes

Chang, Ming-Hu 26 July 2001 (has links)
The purpose of this study is to investigate the plastic deformation behavior of a round bar at the roll-gap during the rolling process of a three-roll planetary mill. The analysis is carried out with the aid of the finite element program MARC, adopting the large-deformation/large-strain theory and the updated Lagrangian formulation (ULF). A mesh rezoning procedure is also adopted to remedy the otherwise uncontrollable error of elements turning inside out. The mesh system of the whole bar billet is created using three-dimensional brick elements, and the three-dimensional elastic-plastic finite element model in MARC is chosen to perform the simulations of three-roll planetary rolling processes. The simulation examples consist of three groups: first, three different friction coefficients are adopted to investigate the rolling process; second, five different offset angles are used; finally, five different roller profiles are studied. The numerical results obtained, including equivalent von Mises stress and plastic strain distributions, rolling force, rolling moment, and billet speeds at the entrance and exit planes of the roll-gap, are useful in designing the pass schedules of three-roll planetary rolling processes.
234

Coupling Efficiency of Graded-Index Polymer Optical Fiber

Liu, Chia-i 25 July 2009 (has links)
The effects of the geometry parameters of graded-index polymer optical fiber (GI-POF) components on coupling efficiency and signal mixing proportion are studied in this thesis. Simulation and experimental approaches are used to investigate the effects of light sources on the coupling efficiency under misalignment and in Y-couplers and V-groove couplers. Two different light sources are employed in this study: a laser diode (LD) and a vertical-cavity surface-emitting laser (VCSEL). The optimum coupling angle and refractive index of the filler in the Y-coupler are studied with a light-emitting diode (LED) light source. A good agreement between the simulation and experimental results is shown in this work. Furthermore, two V-groove array arrangements, the parallel V-groove array and the skew V-groove array, are proposed in this study to mix multiple light sources. The optimum parameters of the V-groove are designed to achieve the highest coupling efficiency. The performance of the different V-groove array arrangements has also been demonstrated for multi-signal mixing.
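To give a feel for how lateral misalignment degrades coupling, the snippet below evaluates the standard Gaussian-mode overlap formula. GI-POF is multimode, so this single-mode approximation is only a qualitative illustration, and the mode-field radius is an assumed value rather than a parameter from the thesis.

```python
import numpy as np

# Single-mode approximation: two identical Gaussian modes of mode-field
# radius w coupled with lateral offset d have efficiency eta = exp(-(d/w)^2).

def coupling_efficiency(d, w):
    """Power coupling efficiency of identical Gaussian modes at lateral offset d."""
    return np.exp(-(d / w) ** 2)

w = 60.0  # assumed mode-field radius in micrometers
for d in (0.0, 15.0, 30.0, 60.0):
    eta = coupling_efficiency(d, w)
    print(f"offset {d:5.1f} um -> eta = {eta:.3f} ({10 * np.log10(eta):6.2f} dB)")
```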
235

Self-interference handling in OFDM based wireless communication systems

Yücek, Tevfik. January 2003 (has links)
Thesis (M.S.E.E.)--University of South Florida, 2003.

Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier modulation scheme that provides efficient bandwidth utilization and robustness against time dispersive channels. This thesis deals with self-interference, or the corruption of the desired signal by itself, in OFDM systems. Inter-symbol Interference (ISI) and Inter-carrier Interference (ICI) are the two types of self-interference in OFDM systems. The cyclic prefix is one method to prevent ISI, which is the interference of the echoes of a transmitted signal with the original transmitted signal. The length of the cyclic prefix required to remove ISI depends on the channel conditions, and usually it is chosen according to the worst-case channel scenario. Methods to find the parameters required to adapt the length of the cyclic prefix to the instantaneous channel conditions are investigated.

The frequency selectivity of the channel is extracted from instantaneous channel frequency estimates, and methods to estimate related parameters, e.g. coherence bandwidth and root-mean-squared (RMS) delay spread, are given. These parameters can also be used to better utilize the available resources in wireless systems through transmitter and receiver adaptation. Another common self-interference in OFDM systems is ICI, the power leakage among different sub-carriers, which degrades the performance of both symbol detection and channel estimation. Two new methods are proposed to reduce the effect of ICI in symbol detection and in channel estimation. The first method uses the colored nature of ICI to cancel it in order to decrease the error rate in the detection of transmitted symbols, and the second method reduces the effect of ICI in channel estimation by jointly estimating the channel and the frequency offset, a major source of ICI.
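The cyclic-prefix argument can be made concrete in a few lines: when the prefix is at least as long as the channel's delay spread, the linear channel convolution acts as a circular one on each OFDM symbol, and a one-tap per-subcarrier equalizer removes the ISI exactly. The toy channel and parameters below are assumptions for illustration, not values from the thesis.

```python
import numpy as np

# Minimal sketch: a CP longer than the channel impulse response makes the
# channel circular, so symbols are recovered exactly by one-tap equalization.

N, cp_len = 64, 8
h = np.array([1.0, 0.5, 0.25])                   # toy 3-tap channel (< CP length)

rng = np.random.default_rng(1)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)  # QPSK subcarriers
x = np.fft.ifft(symbols)                         # OFDM modulation
tx = np.concatenate([x[-cp_len:], x])            # prepend cyclic prefix

rx = np.convolve(tx, h)[cp_len:cp_len + N]       # channel, then discard the CP
eq = np.fft.fft(rx) / np.fft.fft(h, N)           # one-tap frequency-domain equalizer
print("max symbol error:", np.max(np.abs(eq - symbols)))   # ~1e-15
```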
236

Model Predictive Control for Automotive Engine Torque Considering Internal Exhaust Gas Recirculation

Hayakawa, Yoshikazu, Jimbo, Tomohiko 09 1900 (has links)
The 18th World Congress of the International Federation of Automatic Control, Milano (Italy), August 28 - September 2, 2011.
237

Développement d'un cadre méthodologique pour l'évaluation de l'équivalence écologique : Application dans le contexte de la séquence "Éviter, Réduire, Compenser" en France / Development of a methodological framework to assess ecological equivalence : application in the context of the mitigation hierarchy in France

Bezombes, Lucie 07 December 2017 (has links)
In light of the global erosion of biodiversity caused by human activities, biodiversity offsets, and more broadly the mitigation hierarchy, have been increasingly used since the 1970s with the ambition of reconciling economic development and biodiversity conservation. The objective is to achieve "No Net Loss" (NNL) of biodiversity. One of the key issues in achieving this goal is demonstrating ecological equivalence between the gains from offsets and the losses caused by impacts. Despite regulatory improvements, French law does not include a method for assessing equivalence, and no method is unanimously recognized. This leads to heterogeneous practices and difficulties in reaching the NNL objective. In this context, this thesis aims to develop a standardized methodological framework (SMF) for assessing equivalence that combines operationality, a scientific basis, and comprehensiveness (taking into account the four dimensions of equivalence: ecological, spatial, temporal, and uncertainty).

First, 13 methods used abroad are analysed in order to identify structural elements for the development of an SMF adapted to the French context. The construction is decomposed into three steps. The first consists in selecting an organized set of indicators on which the equivalence assessment is based, in order to meet legislative requirements and reflect the complexity of biodiversity; the assessment is done at two spatial scales (on-site and within an expanded perimeter) and at three levels of concern (general, habitat, or species). The second step concerns the prediction of the values of the indicators over time under the effects of the impacts and offsets, taking into account the associated uncertainties. The third step establishes rules for calculating losses and gains, as well as for the overall assessment of equivalence. This SMF is then tested on two study sites in order to demonstrate its added value and identify its limits. Prospects for improving the SMF, and more broadly the assessment of equivalence, are then suggested. Finally, all these elements make it possible to question the effectiveness of offsets in tackling biodiversity erosion.
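To make the losses/gains logic tangible, here is a purely illustrative balance of the kind such a framework formalizes; the areas, quality scores, and uncertainty discount below are invented numbers, not the SMF's actual indicators or rules.

```python
# Hypothetical no-net-loss check: each site contributes area (ha) times the
# predicted change in indicator quality (0-1), with offset gains discounted
# for prediction uncertainty.

impact_area_ha, quality_loss = 12.0, 0.6     # invented impacted habitat
offset_area_ha, quality_gain = 30.0, 0.3     # invented restoration gain
uncertainty_discount = 0.8                   # keep only 80% of predicted gains

losses = impact_area_ha * quality_loss                          # 7.2 quality-ha
gains = offset_area_ha * quality_gain * uncertainty_discount    # 7.2 quality-ha
verdict = "no net loss" if gains >= losses else "net loss"
print(f"losses {losses:.1f}, gains {gains:.1f} -> {verdict}")
```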
238

Teknisk och visuell kvalitetsutvärdering av fyrfärgsprintrar och offset / Technical and visual evaluation of print quality in four colour digital printers and offset

Beijer-Olsen, Anna, Björsund, Emma January 2005 (has links)
Stora Enso is an integrated paper, packaging, and forest products company producing fine paper, board, and wood products. This study was carried out on behalf of Stora Enso and evaluates and investigates the correlations between various quality parameters and perceived image quality. It is unclear what influences perceived image quality the most, and Stora Enso therefore wanted deeper knowledge of which quality parameters lead to a good print result. With the development of digital presses, paper grades with varying properties are being introduced to the market, and Stora Enso wants to know how different paper grades perform, quality-wise, when printed on different presses. In this study, the measurements and analyses (print-technical, paper-technical, and subjective) were based on eleven paper grades printed on three presses: DC2060, iGen3, and offset. The report presents results for each press and paper combination, and possible correlations with perceived image quality are mapped.

The study has shown that the print result on uncoated paper grades is affected by several parameters. Measured values for density, print gloss, color gamut volume, and color quality in solid-tone areas correlate well with perceived image quality. When printing on uncoated paper grades it is therefore important to choose a grade with the properties desired for the print job in question. The study also shows that mottling, print uniformity, and gloss variations do not affect the evaluation of the print result in a visual assessment. For coated paper grades, no measured parameters clearly correlate with the perceived image quality, which suggests that similar print results are obtained on DC2060, iGen3, and offset regardless of which coated paper grade is used. This is noteworthy, since prices for coated paper grades vary greatly. The paper parameters that appear to influence the perceived print result on both uncoated and coated grades are first and foremost the paper's surface roughness and whiteness. Together, these parameters affect the color gamut, which in turn affects the color quality of the print. Density and print gloss are affected by the paper's surface roughness, so it was not unexpected that roughness and whiteness would also influence perceived image quality.

Regarding press performance, iGen3 produced a very consistent print result regardless of paper grade, so this press can be considered very insensitive to the choice of paper. DC2060 achieved very high values for, among other things, density, print gloss, and color gamut volume, although small variations in print results occurred. Prints made in offset showed very low mottling and low print non-uniformity. On coated paper grades, DC2060 and offset produced fewer variations in print results than on uncoated grades.
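The correlation analysis described above amounts to computations of this kind; the values below are invented for two measured parameters and a panel score across eleven grades, not the thesis data.

```python
import numpy as np

# Invented example data: one row per paper grade.
density = np.array([1.8, 1.9, 2.1, 1.7, 2.0, 2.2, 1.6, 1.9, 2.0, 2.1, 1.8])
gloss = np.array([45, 50, 60, 40, 55, 65, 35, 48, 52, 58, 44], dtype=float)
perceived = np.array([6.1, 6.5, 7.4, 5.8, 7.0, 7.9, 5.2, 6.4, 6.9, 7.5, 6.0])

# Pearson correlation of each measured parameter with perceived image quality.
for name, x in (("density", density), ("print gloss", gloss)):
    r = np.corrcoef(x, perceived)[0, 1]
    print(f"{name}: r = {r:.2f}")
```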
239

LTE SYSTEM ARCHITECTURE FOR COVERAGE AND DOPPLER REDUCTION IN RANGE TELEMETRY

Kogiantis, Achilles, Rege, Kiran, Triolo, Anthony A. 10 1900 (has links)
A novel approach employing 4G LTE cellular technology for test range telemetry is presented. Providing aeronautical mobile telemetry using commercial off-the-shelf (COTS) cellular equipment poses many challenges, including three-dimensional (3D) coverage, the need for uninterrupted high data throughput, and the very high Doppler speeds of the test articles (TA). Each of these requirements is difficult to meet with a standard cellular approach. We present a novel architecture that provides 3D coverage over the span of a test range, allowing the TA to establish a radio link with base stations that see a manageable Doppler because of the reduced projection of the TA's speed onto the radio link line. Preliminary results illustrate that a variety of flight plans can be accommodated with commercial LTE technology by employing LTE's mobility mechanisms and adding centralized control. The resulting network architecture and Radio Access Network topology allow very high throughputs to be delivered throughout the test range with a judicious placement of base stations.
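The geometric idea behind the architecture, keeping the projection of the TA's speed onto the link line small, follows from the basic Doppler relation f_d = (v/c) f_c cos(theta); the carrier frequency and speed below are assumed values for illustration.

```python
import numpy as np

# Doppler shift seen by a base station versus the angle theta between the
# test article's velocity and the radio line-of-sight (assumed parameters).

c = 3e8          # speed of light, m/s
f_c = 2.1e9      # hypothetical LTE carrier frequency, Hz
v = 300.0        # test article speed, m/s

for theta_deg in (0, 45, 75, 90):
    f_d = (v / c) * f_c * np.cos(np.radians(theta_deg))
    print(f"theta = {theta_deg:2d} deg -> Doppler = {f_d / 1e3:5.2f} kHz")
```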
240

Variance Adaptive Quantization and Adaptive Offset Selection in High Efficiency Video Coding

Abrahamsson, Anna January 2016 (has links)
Video compression uses encoding to reduce the number of bits used to represent a video file in order to store and transmit it at a smaller size. A decoder reconstructs the received data into a representation of the original video. Video coding standards determine how the video compression should be conducted, and one of the latest standards is High Efficiency Video Coding (HEVC). One technique that can be used in the encoder is variance adaptive quantization, which improves the subjective quality of videos. The technique assigns lower quantization parameter values to parts of the frame with low variance to increase quality, and vice versa. Another part of the encoder is the sample adaptive offset filter, which reduces pixel errors caused by the compression. In this project, the variance adaptive quantization technique is implemented in the Ericsson research HEVC encoder c65. Its functionality is verified by subjective evaluation. It is investigated whether the sample adaptive offset can exploit the adjusted quantization parameter values when reducing pixel errors to improve compression efficiency. A model for this purpose is developed and implemented in c65. Data indicates that the model can increase the error reduction in the sample adaptive offset. However, the difference in performance between the model and a reference encoder is not significant.
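A minimal sketch of the variance adaptive quantization idea, not Ericsson's c65 implementation: per-block pixel variance is mapped to a QP offset around the base QP so that flat regions are quantized more finely. The log mapping and strength constant are assumptions for illustration.

```python
import numpy as np

def adaptive_qp(frame, base_qp=32, block=16, strength=6.0):
    """Assign lower QP to low-variance blocks and higher QP to busy ones."""
    h, w = frame.shape
    variances = np.array([[frame[y:y + block, x:x + block].var()
                           for x in range(0, w, block)]
                          for y in range(0, h, block)])
    log_var = np.log2(variances + 1.0)           # compress the dynamic range
    qp = base_qp + strength * (log_var - log_var.mean()) / (log_var.std() + 1e-9)
    return np.clip(np.round(qp), 0, 51).astype(int)   # HEVC QP range 0-51

frame = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
frame[:32, :32] = 128.0                          # a flat region gets a lower QP
print(adaptive_qp(frame))
```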
